{"id":35,"date":"2014-08-07T16:51:00","date_gmt":"2014-08-07T14:51:00","guid":{"rendered":""},"modified":"2015-11-18T23:21:00","modified_gmt":"2015-11-18T22:21:00","slug":"cloning-pluggable-database-with-custom-snapshot","status":"publish","type":"post","link":"https:\/\/ilmarkerm.eu\/blog\/2014\/08\/cloning-pluggable-database-with-custom-snapshot\/","title":{"rendered":"Cloning pluggable database with custom snapshot"},"content":{"rendered":"<p>When cloning pluggable databases, Oracle also gives you the SNAPSHOT COPY clause to utilize storage system snapshot capabilities and save on storage space. But it works only when the datafiles are placed on ACFS, ZFS Storage Appliance over dNFS, or NetApp (Oracle Multitenant Pluggable Database Snapshot Cloning: Use Cases and Supported Platforms, Doc ID 1597027.1), and you can neither unplug\/drop the source PDB nor unplug the clone PDB.<\/p>\n<p>Recently I&#8217;ve started to like BTRFS (only on test systems!) and it also has built-in snapshot capabilities. In this article I&#8217;ll try to use BTRFS snapshots to create pluggable database clones. 
It is not as automated as Oracle&#8217;s built-in snapshot clones, but it is possible.<\/p>\n<p>What I have before cloning:<\/p>\n<pre><code class=\"sql\">SQL&gt; select name, open_mode, guid from v$pdbs;\n\nNAME                           OPEN_MODE  GUID\n------------------------------ ---------- --------------------------------\nPDB$SEED                       READ ONLY  FFCDD773320B2767E0432700FA0A06D0\nDB1                            MOUNTED    FFCC6C3F473E124BE0432700FA0A429E\nDB2                            READ WRITE 000A7D190B3C15A1E0532700FA0A3C10\n\nSQL&gt; alter session set container=db2;\n\nSession altered.\n\nSQL&gt; select round(bytes\/1024\/1024) mb, file_name from dba_data_files;\n\n        MB FILE_NAME\n---------- -------------\n       280 \/u01\/db2\/system01.dbf\n       690 \/u01\/db2\/sysaux01.dbf\n         5 \/u01\/db2\/users01.dbf\n       100 \/u01\/db2\/tdata01.dbf\n      2048 \/u01\/db2\/tlarge01.dbf\n<\/code><\/pre>\n<p>My goal is to clone DB2 to a new PDB called DB3. Currently DB2 is stored under \/u01\/db2\/, which is a dedicated subvolume under the BTRFS filesystem \/u01. 
The file system is currently using 21.23GB.<\/p>\n<pre><code class=\"sql\">[root@cdbtest u01]# btrfs subvolume list -a -t \/u01\nID      gen     top level       path\n--      ---     ---------       ----\n256     1543    5               app\n281     736     5               app_bak\n285     1545    5               data\n286     1297    5               data_bak\n287     1544    5               db2\n289     1333    5               db2_bak\n\n[root@cdbtest u01]# ls -l \/u01\/db2\ntotal 3198048\n-rw-r----- 1 oracle oinstall  723525632 Aug  7 17:40 sysaux01.dbf\n-rw-r----- 1 oracle oinstall  293609472 Aug  7 17:42 system01.dbf\n-rw-r----- 1 oracle oinstall  104865792 Aug  7 17:21 tdata01.dbf\n-rw-r----- 1 oracle oinstall   20979712 Aug  7 16:56 temp01.dbf\n-rw-r----- 1 oracle oinstall 2147491840 Aug  7 17:21 tlarge01.dbf\n-rw-r----- 1 oracle oinstall    5251072 Aug  7 17:21 users01.dbf\n\n[root@cdbtest u01]# btrfs filesystem df \/u01\nData: total=29.01GB, used=21.23GB\nSystem, DUP: total=8.00MB, used=4.00KB\nSystem: total=4.00MB, used=0.00\nMetadata, DUP: total=1.00GB, used=124.73MB\nMetadata: total=8.00MB, used=0.00\n<\/code><\/pre>\n<p>Now, let&#8217;s start. 
First, I need to open DB2 read-only and create a description XML file from it:<\/p>\n<pre><code class=\"sql\">SQL&gt; exec dbms_pdb.describe(pdb_descr_file=&gt;'\/u01\/db2_pdb.xml');\n\nPL\/SQL procedure successfully completed.\n<\/code><\/pre>\n<p>Then create a BTRFS snapshot under the directory \/u01\/db3:<\/p>\n<pre><code class=\"sql\">[root@cdbtest u01]# btrfs subvolume snapshot db2 db3\nCreate a snapshot of 'db2' in '.\/db3'\n\n[root@cdbtest u01]# btrfs filesystem df \/u01\nData: total=29.01GB, used=21.23GB\nSystem, DUP: total=8.00MB, used=4.00KB\nSystem: total=4.00MB, used=0.00\nMetadata, DUP: total=1.00GB, used=124.75MB\nMetadata: total=8.00MB, used=0.00\n\n[root@cdbtest u01]# ls -l\ntotal 8\ndrwxr-xr-x 1 oracle oinstall   50 Aug  4 11:55 app\ndrwxr-xr-x 1 oracle oinstall   50 Aug  4 11:55 app_bak\ndrwxrwxr-x 1 oracle oinstall   60 Aug  5 15:16 data\ndrwxrwxr-x 1 oracle oinstall   60 Aug  5 15:16 data_bak\ndrwxr-xr-x 1 oracle oinstall  136 Aug  7 17:53 db2\ndrwxr-xr-x 1 oracle oinstall  112 Aug  7 15:47 db2_bak\n-rw-r--r-- 1 oracle oinstall 6370 Aug  7 17:53 db2_pdb.xml\ndrwxr-xr-x 1 oracle oinstall  136 Aug  7 17:53 db3\n\n[root@cdbtest u01]# ls -l db3\ntotal 3198048\n-rw-r----- 1 oracle oinstall  723525632 Aug  7 17:49 sysaux01.dbf\n-rw-r----- 1 oracle oinstall  293609472 Aug  7 17:49 system01.dbf\n-rw-r----- 1 oracle oinstall  104865792 Aug  7 17:49 tdata01.dbf\n-rw-r----- 1 oracle oinstall   20979712 Aug  7 16:56 temp01.dbf\n-rw-r----- 1 oracle oinstall 2147491840 Aug  7 17:49 tlarge01.dbf\n-rw-r----- 1 oracle oinstall    5251072 Aug  7 17:49 users01.dbf\n<\/code><\/pre>\n<p>I now have a snapshot, and no additional space used! 
\ud83d\ude42 Let&#8217;s try to connect it as a new PDB.<\/p>\n<pre><code class=\"sql\">SQL&gt; alter session set container=cdb$root;\n\nSession altered.\n\nSQL&gt; create pluggable database db3 using '\/u01\/db2_pdb.xml' source_file_name_convert=('\/u01\/db2\/','\/u01\/db3\/') nocopy tempfile reuse;\ncreate pluggable database db3 using '\/u01\/db2_pdb.xml' source_file_name_convert=('\/u01\/db2\/','\/u01\/db3\/') nocopy tempfile reuse\n*\nERROR at line 1:\nORA-65122: Pluggable database GUID conflicts with the GUID of an existing container.\n<\/code><\/pre>\n<p>A simple plug-in does not work, since the CDB already contains a PDB with the same GUID. The CREATE PLUGGABLE DATABASE command has an AS CLONE clause to generate new GUID and DBID values.<\/p>\n<pre><code class=\"sql\">SQL&gt; create pluggable database db3 AS CLONE using '\/u01\/db2_pdb.xml' source_file_name_convert=('\/u01\/db2\/','\/u01\/db3\/') nocopy tempfile reuse;\n\nPluggable database created.\n\nSQL&gt; select name, open_mode, guid from v$pdbs;\n\nNAME                           OPEN_MODE  GUID\n------------------------------ ---------- --------------------------------\nPDB$SEED                       READ ONLY  FFCDD773320B2767E0432700FA0A06D0\nDB1                            MOUNTED    FFCC6C3F473E124BE0432700FA0A429E\nDB2                            READ ONLY  000A7D190B3C15A1E0532700FA0A3C10\nDB3                            MOUNTED    000C54AE44352331E0532700FA0AFA69\n\nSQL&gt; alter session set container=db3;\n\nSession altered.\n\nSQL&gt; select name from v$datafile;\n\nNAME\n----------------------------------------------------\n\/u01\/data\/CDB2\/datafile\/o1_mf_undotbs1_9xyyw5dc_.dbf\n\/u01\/db3\/system01.dbf\n\/u01\/db3\/sysaux01.dbf\n\/u01\/db3\/users01.dbf\n\/u01\/db3\/tdata01.dbf\n\/u01\/db3\/tlarge01.dbf\n\n6 rows selected.\n<\/code><\/pre>\n<p>Looks good from the DB side (ignore the UNDO datafile; it is the CDB&#8217;s undo datafile, since a PDB does not have its own). 
How does the storage usage look?<\/p>\n<pre><code class=\"sql\">[root@cdbtest u01]# btrfs filesystem df \/u01\nData: total=29.01GB, used=21.23GB\nSystem, DUP: total=8.00MB, used=4.00KB\nSystem: total=4.00MB, used=0.00\nMetadata, DUP: total=1.00GB, used=124.87MB\nMetadata: total=8.00MB, used=0.00\n<\/code><\/pre>\n<p>Still 21.23GB. Let&#8217;s open both DB2 and DB3 and see what happens.<\/p>\n<pre><code class=\"sql\">SQL&gt; alter session set container=cdb$root;\n\nSession altered.\n\nSQL&gt; alter pluggable database db2 close;\n\nPluggable database altered.\n\nSQL&gt; alter pluggable database db2 open;\n\nPluggable database altered.\n\nSQL&gt; alter pluggable database db3 open;\n\nPluggable database altered.\n\nSQL&gt; select name, open_mode, guid from v$pdbs;\n\nNAME                           OPEN_MODE  GUID\n------------------------------ ---------- --------------------------------\nPDB$SEED                       READ ONLY  FFCDD773320B2767E0432700FA0A06D0\nDB1                            MOUNTED    FFCC6C3F473E124BE0432700FA0A429E\nDB2                            READ WRITE 000A7D190B3C15A1E0532700FA0A3C10\nDB3                            READ WRITE 000C54AE44352331E0532700FA0AFA69\n<\/code><\/pre>\n<pre><code class=\"sql\">[root@cdbtest u01]# btrfs filesystem df \/u01\nData: total=29.01GB, used=21.23GB\nSystem, DUP: total=8.00MB, used=4.00KB\nSystem: total=4.00MB, used=0.00\nMetadata, DUP: total=1.00GB, used=124.90MB\nMetadata: total=8.00MB, used=0.00\n<\/code><\/pre>\n<p>Still no additional storage used by the snapshot. I&#8217;ll modify some data in DB3&#8217;s tables; then it must consume additional space.<\/p>\n<pre><code class=\"sql\">SQL&gt; alter session set container=db3;\n\nSession altered.\n\nSQL&gt; select count(*) from t1.large;\n\n  COUNT(*)\n----------\n   1897686\n\nSQL&gt; desc t1.large\n Name                                                  Null?    
Type\n ----------------------------------------------------- -------- ------------------------------------\n OWNER                                                 NOT NULL VARCHAR2(128)\n OBJECT_NAME                                           NOT NULL VARCHAR2(128)\n SUBOBJECT_NAME                                                 VARCHAR2(128)\n OBJECT_ID                                             NOT NULL NUMBER\n DATA_OBJECT_ID                                                 NUMBER\n OBJECT_TYPE                                                    VARCHAR2(23)\n CREATED                                               NOT NULL DATE\n LAST_DDL_TIME                                         NOT NULL DATE\n TIMESTAMP                                                      VARCHAR2(19)\n STATUS                                                         VARCHAR2(7)\n TEMPORARY                                                      VARCHAR2(1)\n GENERATED                                                      VARCHAR2(1)\n SECONDARY                                                      VARCHAR2(1)\n NAMESPACE                                             NOT NULL NUMBER\n EDITION_NAME                                                   VARCHAR2(128)\n SHARING                                                        VARCHAR2(13)\n EDITIONABLE                                                    VARCHAR2(1)\n ORACLE_MAINTAINED                                              VARCHAR2(1)\n\nSQL&gt; update t1.large set owner = substr(owner, 1, 2);\n\n1897686 rows updated.\n\nSQL&gt; commit;\n\nCommit complete.\n\nSQL&gt; alter system checkpoint;\n\nSystem altered.\n\nSQL&gt; select round(bytes\/1024\/1024) mb from dba_segments where owner='T1' and segment_name = 'LARGE';\n\n        MB\n----------\n       256\n<\/code><\/pre>\n<pre><code class=\"sql\">[root@cdbtest u01]# btrfs filesystem df \/u01\nData: total=29.01GB, used=21.78GB\nSystem, DUP: total=8.00MB, used=4.00KB\nSystem: total=4.00MB, 
used=0.00\nMetadata, DUP: total=1.00GB, used=130.33MB\nMetadata: total=8.00MB, used=0.00\n<\/code><\/pre>\n<p>I tried to change every block in that table. The table is 256MB, yet 21.78-21.23=0.55GB was changed on disk. Just for a test, let&#8217;s try the update again (to see a little more of how BTRFS snapshots work).<\/p>\n<pre><code class=\"sql\">SQL&gt; update t1.large set owner = 'AX';\n\n1897686 rows updated.\n\nSQL&gt; commit;\n\nCommit complete.\n\nSQL&gt; alter system checkpoint;\n\nSystem altered.\n<\/code><\/pre>\n<pre><code class=\"sql\">[root@cdbtest u01]# btrfs filesystem df \/u01\nData: total=29.01GB, used=22.00GB\nSystem, DUP: total=8.00MB, used=4.00KB\nSystem: total=4.00MB, used=0.00\nMetadata, DUP: total=1.00GB, used=132.34MB\nMetadata: total=8.00MB, used=0.00\n<\/code><\/pre>\n<p>I should have modified the same blocks, but BTRFS used another 0.22GB of disk space. I need to look more into how BTRFS snapshots work. But there you have it: PDB snapshot cloning without restrictions \ud83d\ude42<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When cloning pluggable databases, Oracle also gives you the SNAPSHOT COPY clause to utilize storage system snapshot capabilities and save on storage space. 
But it works only when the datafiles are placed on ACFS, ZFS Storage Appliance over dNFS, or NetApp (Oracle Multitenant Pluggable Database Snapshot Cloning: Use Cases and Supported Platforms, Doc ID 1597027.1) and you also [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[36,5,4],"class_list":["post-35","post","type-post","status-publish","format-standard","hentry","category-blog-entry","tag-btrfs","tag-linux","tag-oracle"],"_links":{"self":[{"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/posts\/35","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/comments?post=35"}],"version-history":[{"count":2,"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/posts\/35\/revisions"}],"predecessor-version":[{"id":155,"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/posts\/35\/revisions\/155"}],"wp:attachment":[{"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/media?parent=35"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/categories?post=35"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ilmarkerm.eu\/blog\/wp-json\/wp\/v2\/tags?post=35"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}