- Written by: ilmarkerm
- Category: Blog entry
- Published: August 29, 2014
Oracle 12c gave us, among other things, two new features: DBMS_UTILITY.EXPAND_SQL_TEXT and temporal validity. DBMS_UTILITY.EXPAND_SQL_TEXT is advertised as a means to expand SQL text that uses views, but it is also very useful for seeing how Oracle internally rewrites some SQL queries. Here I’m going to use it to see how temporal validity queries are actually executed. All tests are done using 12.1.0.2 Enterprise Edition.
First I’ll create a table for testing with two periods, VALID and ACTIVE. VALID uses explicit DATE columns and ACTIVE uses hidden TIMESTAMP columns.
CREATE TABLE temporal
(
id NUMBER PRIMARY KEY,
t VARCHAR2 (100),
valid_start DATE,
valid_end DATE,
PERIOD FOR valid (valid_start, valid_end)
);
ALTER TABLE temporal
ADD period FOR active;
SQL> select column_name, data_type, hidden_column from user_tab_cols where table_name = 'TEMPORAL' order by column_id;
COLUMN_NAME DATA_TYPE HID
-------------------- ------------------------------ ---
ID NUMBER NO
T VARCHAR2 NO
VALID_START DATE NO
VALID_END DATE NO
ACTIVE NUMBER YES
ACTIVE_START TIMESTAMP(6) WITH TIME ZONE YES
ACTIVE_END TIMESTAMP(6) WITH TIME ZONE YES
VALID NUMBER YES
8 rows selected.
The first syntax that can be used to query temporal data is select id, t from temporal AS OF PERIOD FOR valid <timestamp expression>. Let’s use DBMS_UTILITY.EXPAND_SQL_TEXT to see how Oracle actually executes this query (the result is formatted for readability using Toad).
declare
c clob;
begin
DBMS_UTILITY.EXPAND_SQL_TEXT(q'[select id, t from temporal as of period for valid DATE'2013-02-01']', c);
dbms_output.put_line(c);
end;
/
SELECT "A1"."ID" "ID", "A1"."T" "T"
FROM (SELECT "A2"."ID" "ID",
"A2"."T" "T",
"A2"."VALID_START" "VALID_START",
"A2"."VALID_END" "VALID_END"
FROM "DBAUSER"."TEMPORAL" "A2"
WHERE ("A2"."VALID_START" IS NULL OR "A2"."VALID_START" <= TO_DATE (' 2013-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
AND ("A2"."VALID_END" IS NULL OR "A2"."VALID_END" > TO_DATE (' 2013-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))) "A1"
No surprises here: Oracle just replaced the AS OF PERIOD FOR clause with the corresponding WHERE predicates. Another way to set the validity period is DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME. With that in effect, select id, t from temporal is turned into:
SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT');
declare
c clob;
begin
DBMS_UTILITY.EXPAND_SQL_TEXT(q'[select id, t from temporal]', c);
dbms_output.put_line(c);
end;
/
SELECT "A1"."ID" "ID", "A1"."T" "T"
FROM (SELECT "A2"."ID" "ID",
"A2"."T" "T",
"A2"."VALID_START" "VALID_START",
"A2"."VALID_END" "VALID_END"
FROM "DBAUSER"."TEMPORAL" "A2"
WHERE ("A2"."VALID_START" IS NULL OR "A2"."VALID_START" <= SYSTIMESTAMP (6))
AND ("A2"."VALID_END" IS NULL OR "A2"."VALID_END" > SYSTIMESTAMP (6))
AND ("A2"."ACTIVE_START" IS NULL OR "A2"."ACTIVE_START" <= SYSTIMESTAMP (6))
AND ("A2"."ACTIVE_END" IS NULL OR "A2"."ACTIVE_END" > SYSTIMESTAMP (6))) "A1"
Two interesting things to note here. First, DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME currently offers no way to name a period, so the restriction is applied to ALL periods defined on the table. Second, the columns valid_start and valid_end were created as DATE, but the generated predicates compare them against TIMESTAMP values, making index use on them impossible (DATE is automatically converted to TIMESTAMP, not the other way around). Not really a problem, just something to take note of, since the default hidden period columns are created as TIMESTAMP, not DATE.
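Since auto-created hidden period columns use TIMESTAMP(6) WITH TIME ZONE, one way to keep the generated predicates index-friendly is to declare your own period columns with that same datatype. A sketch only (the table and index names are mine; the syntax mirrors the example above):

```sql
-- Sketch: period columns declared with the same datatype Oracle uses for
-- auto-created hidden period columns, so comparisons against SYSTIMESTAMP
-- need no implicit DATE -> TIMESTAMP conversion on the column side.
CREATE TABLE temporal_tz
(
    id          NUMBER PRIMARY KEY,
    t           VARCHAR2 (100),
    valid_start TIMESTAMP (6) WITH TIME ZONE,
    valid_end   TIMESTAMP (6) WITH TIME ZONE,
    PERIOD FOR valid (valid_start, valid_end)
);

-- With matching datatypes, a regular index on the period columns remains usable
CREATE INDEX temporal_tz_valid_ix ON temporal_tz (valid_start, valid_end);
```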
- Written by: ilmarkerm
- Category: Blog entry
- Published: August 18, 2014
Before 11g, if you added a new column with a default value to a table, Oracle had to lock and physically update all rows in the table. Possibly a painful process 🙂 11g helped with that a little by introducing metadata-only default values for NOT NULL columns, so the change was made only in the data dictionary, not in the table data.
SQL> create table default_values (
2 id number primary key
3 );
Table created.
SQL> insert into default_values values (1);
1 row created.
SQL> alter table default_values add def_w_not_null varchar2(20) default 'THIS IS DEFAULT' not null;
Table altered.
SQL> select * from default_values;
ID DEF_W_NOT_NULL
---------- --------------------
1 THIS IS DEFAULT
SQL> select * from table(dbms_xplan.display_cursor(null,null,'projection'));
...
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "DEFAULT_VALUES"."ID"[NUMBER,22],
NVL("DEFAULT_VALUES"."DEF_W_NOT_NULL",'THIS IS DEFAULT')[20]
...
So internally Oracle just rewrites the column reference with the expression NVL("DEFAULT_VALUES"."DEF_W_NOT_NULL",'THIS IS DEFAULT'). Makes sense 🙂
12c now adds the possibility of a metadata-only default value for nullable columns as well. Let’s see what happens:
SQL> alter table default_values add def_w_null varchar2(20) default 'THIS IS DEFAULT2';
Table altered.
SQL> select id, def_w_null from default_values;
ID DEF_W_NULL
---------- --------------------
1 THIS IS DEFAULT2
SQL> select * from table(dbms_xplan.display_cursor(null,null,'projection'));
...
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "ID"[NUMBER,22], DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00003$",0)),NULL
,NVL("DEF_W_NULL",'THIS IS DEFAULT2'),'0',NVL("DEF_W_NULL",'THIS IS
DEFAULT2'),'1',"DEF_W_NULL")[20]
We have a much more complex expression now: DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00003$",0)),NULL,NVL("DEF_W_NULL",'THIS IS DEFAULT2'),'0',NVL("DEF_W_NULL",'THIS IS DEFAULT2'),'1',"DEF_W_NULL"). And also a new hidden column, SYS_NC00003$. What is in there?
SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;
ID DEF_W_NULL IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
1 THIS IS DEFAULT2 is null
SQL> insert into default_values (id, DEF_W_NULL) values (2, null);
1 row created.
SQL> commit;
Commit complete.
SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;
ID DEF_W_NULL IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
1 THIS IS DEFAULT2 is null
2 is not null 01
SQL> insert into default_values (id, DEF_W_NULL) values (3, 'some value');
1 row created.
SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;
ID DEF_W_NULL IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
1 THIS IS DEFAULT2 is null
2 is not null 01
3 some value is not null 01
SQL> insert into default_values (id) values (4);
1 row created.
SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;
ID DEF_W_NULL IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
1 THIS IS DEFAULT2 is null
2 is not null 01
3 some value is not null 01
4 THIS IS DEFAULT2 is not null 01
This new column seems to hold a bit indicating whether the value in the column should be trusted as stored or replaced with the default. All rows that existed before the new column was added have NULL in this hidden column, but every newly modified row sets a specific bit to 1. Looking at the original expression, there is also a branch for this bit being 0, so maybe this hidden column can contain other "row status bits" as well? Here is the expression again, formatted for readability:
DECODE (TO_CHAR (sys_op_vecbit ("SYS_NC00003$", 0)),
NULL, NVL ("DEF_W_NULL", 'THIS IS DEFAULT2'),
'0', NVL ("DEF_W_NULL", 'THIS IS DEFAULT2'),
'1', "DEF_W_NULL"
)
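To poke at the flag byte directly, you can call sys_op_vecbit yourself. It is an undocumented internal function, so treat this purely as an exploration sketch; checking bits beyond position 0 is just my speculation about where any additional status bits would live:

```sql
-- Exploration sketch: read individual bits of the hidden flag column.
-- Bit 0 is the "stored value is authoritative" flag used by the DECODE above.
SELECT id,
       def_w_null,
       sys_op_vecbit ("SYS_NC00003$", 0) AS bit0,
       sys_op_vecbit ("SYS_NC00003$", 1) AS bit1
  FROM default_values;
```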
- Written by: ilmarkerm
- Category: Blog entry
- Published: August 7, 2014
When cloning pluggable databases, Oracle also gives you the SNAPSHOT COPY clause to utilize storage system snapshot capabilities and save storage space. But it works only when the datafiles are placed on ACFS, a ZFS Storage Appliance over dNFS, or NetApp (Oracle Multitenant Pluggable Database Snapshot Cloning: Use Cases and Supported Platforms, Doc ID 1597027.1), and you can neither unplug/drop the source PDB nor unplug the clone PDB.
Recently I’ve started to like BTRFS (on test systems only!), and it too has built-in snapshot capabilities. In this article I’ll try to use BTRFS snapshots to create pluggable database clones. It is not as automated as Oracle’s built-in snapshots, but it is possible.
What I have before cloning:
SQL> select name, open_mode, guid from v$pdbs;
NAME OPEN_MODE GUID
------------------------------ ---------- --------------------------------
PDB$SEED READ ONLY FFCDD773320B2767E0432700FA0A06D0
DB1 MOUNTED FFCC6C3F473E124BE0432700FA0A429E
DB2 READ WRITE 000A7D190B3C15A1E0532700FA0A3C10
SQL> alter session set container=db2;
Session altered.
SQL> select round(bytes/1024/1024) mb, file_name from dba_data_files;
MB FILE_NAME
---------- -------------
280 /u01/db2/system01.dbf
690 /u01/db2/sysaux01.dbf
5 /u01/db2/users01.dbf
100 /u01/db2/tdata01.dbf
2048 /u01/db2/tlarge01.dbf
My goal is to clone DB2 to a new PDB called DB3. Currently DB2 is stored under /u01/db2/, which is a dedicated subvolume of the BTRFS filesystem /u01. The filesystem is currently using 21.23GB.
[root@cdbtest u01]# btrfs subvolume list -a -t /u01
ID gen top level path
-- --- --------- ----
256 1543 5 app
281 736 5 app_bak
285 1545 5 data
286 1297 5 data_bak
287 1544 5 db2
289 1333 5 db2_bak
[root@cdbtest u01]# ls -l /u01/db2
total 3198048
-rw-r----- 1 oracle oinstall 723525632 Aug 7 17:40 sysaux01.dbf
-rw-r----- 1 oracle oinstall 293609472 Aug 7 17:42 system01.dbf
-rw-r----- 1 oracle oinstall 104865792 Aug 7 17:21 tdata01.dbf
-rw-r----- 1 oracle oinstall 20979712 Aug 7 16:56 temp01.dbf
-rw-r----- 1 oracle oinstall 2147491840 Aug 7 17:21 tlarge01.dbf
-rw-r----- 1 oracle oinstall 5251072 Aug 7 17:21 users01.dbf
[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.23GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=124.73MB
Metadata: total=8.00MB, used=0.00
Now, let’s start. First I need to open DB2 read-only and create a description XML file from it:
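The read-only reopen itself isn’t shown in the transcript; it would look something like this (a sketch of the standard commands, assuming DB2 is currently open read-write):

```sql
-- Put the source PDB into read-only mode so its datafiles stay consistent
-- while the snapshot is taken and the XML description is generated
ALTER PLUGGABLE DATABASE db2 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE db2 OPEN READ ONLY;
```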
SQL> exec dbms_pdb.describe(pdb_descr_file=>'/u01/db2_pdb.xml');
PL/SQL procedure successfully completed.
And create a BTRFS snapshot under the directory /u01/db3:
[root@cdbtest u01]# btrfs subvolume snapshot db2 db3
Create a snapshot of 'db2' in './db3'
[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.23GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=124.75MB
Metadata: total=8.00MB, used=0.00
[root@cdbtest u01]# ls -l
total 8
drwxr-xr-x 1 oracle oinstall 50 Aug 4 11:55 app
drwxr-xr-x 1 oracle oinstall 50 Aug 4 11:55 app_bak
drwxrwxr-x 1 oracle oinstall 60 Aug 5 15:16 data
drwxrwxr-x 1 oracle oinstall 60 Aug 5 15:16 data_bak
drwxr-xr-x 1 oracle oinstall 136 Aug 7 17:53 db2
drwxr-xr-x 1 oracle oinstall 112 Aug 7 15:47 db2_bak
-rw-r--r-- 1 oracle oinstall 6370 Aug 7 17:53 db2_pdb.xml
drwxr-xr-x 1 oracle oinstall 136 Aug 7 17:53 db3
[root@cdbtest u01]# ls -l db3
total 3198048
-rw-r----- 1 oracle oinstall 723525632 Aug 7 17:49 sysaux01.dbf
-rw-r----- 1 oracle oinstall 293609472 Aug 7 17:49 system01.dbf
-rw-r----- 1 oracle oinstall 104865792 Aug 7 17:49 tdata01.dbf
-rw-r----- 1 oracle oinstall 20979712 Aug 7 16:56 temp01.dbf
-rw-r----- 1 oracle oinstall 2147491840 Aug 7 17:49 tlarge01.dbf
-rw-r----- 1 oracle oinstall 5251072 Aug 7 17:49 users01.dbf
I have a snapshot now and no additional space used! 🙂 Let’s try to connect it as a new PDB.
SQL> alter session set container=cdb$root;
Session altered.
SQL> create pluggable database db3 using '/u01/db2_pdb.xml' source_file_name_convert=('/u01/db2/','/u01/db3/') nocopy tempfile reuse;
create pluggable database db3 using '/u01/db2_pdb.xml' source_file_name_convert=('/u01/db2/','/u01/db3/') nocopy tempfile reuse
*
ERROR at line 1:
ORA-65122: Pluggable database GUID conflicts with the GUID of an existing container.
A simple plug-in does not work, since the CDB already has a PDB with the same GUID. The create pluggable database command has an AS CLONE clause to generate new GUID and DBID values.
SQL> create pluggable database db3 AS CLONE using '/u01/db2_pdb.xml' source_file_name_convert=('/u01/db2/','/u01/db3/') nocopy tempfile reuse;
Pluggable database created.
SQL> select name, open_mode, guid from v$pdbs;
NAME OPEN_MODE GUID
------------------------------ ---------- --------------------------------
PDB$SEED READ ONLY FFCDD773320B2767E0432700FA0A06D0
DB1 MOUNTED FFCC6C3F473E124BE0432700FA0A429E
DB2 READ ONLY 000A7D190B3C15A1E0532700FA0A3C10
DB3 MOUNTED 000C54AE44352331E0532700FA0AFA69
SQL> alter session set container=db3;
Session altered.
SQL> select name from v$datafile;
NAME
----------------------------------------------------
/u01/data/CDB2/datafile/o1_mf_undotbs1_9xyyw5dc_.dbf
/u01/db3/system01.dbf
/u01/db3/sysaux01.dbf
/u01/db3/users01.dbf
/u01/db3/tdata01.dbf
/u01/db3/tlarge01.dbf
6 rows selected.
Looks good from the DB side (ignore the UNDO datafile; it is the CDB’s undo datafile, since a PDB does not have its own). How does the storage usage look?
[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.23GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=124.87MB
Metadata: total=8.00MB, used=0.00
Still 21.23GB. Let’s open both DB2 and DB3 and see what happens.
SQL> alter session set container=cdb$root;
Session altered.
SQL> alter pluggable database db2 close;
Pluggable database altered.
SQL> alter pluggable database db2 open;
Pluggable database altered.
SQL> alter pluggable database db3 open;
Pluggable database altered.
SQL> select name, open_mode, guid from v$pdbs;
NAME OPEN_MODE GUID
------------------------------ ---------- --------------------------------
PDB$SEED READ ONLY FFCDD773320B2767E0432700FA0A06D0
DB1 MOUNTED FFCC6C3F473E124BE0432700FA0A429E
DB2 READ WRITE 000A7D190B3C15A1E0532700FA0A3C10
DB3 READ WRITE 000C54AE44352331E0532700FA0AFA69
[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.23GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=124.90MB
Metadata: total=8.00MB, used=0.00
Still no additional storage used by the snapshot. Next I’ll modify some data in DB3 tables; that must consume additional space.
SQL> alter session set container=db3;
Session altered.
SQL> select count(*) from t1.large;
COUNT(*)
----------
1897686
SQL> desc t1.large
Name Null? Type
----------------------------------------------------- -------- ------------------------------------
OWNER NOT NULL VARCHAR2(128)
OBJECT_NAME NOT NULL VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(128)
OBJECT_ID NOT NULL NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(23)
CREATED NOT NULL DATE
LAST_DDL_TIME NOT NULL DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
NAMESPACE NOT NULL NUMBER
EDITION_NAME VARCHAR2(128)
SHARING VARCHAR2(13)
EDITIONABLE VARCHAR2(1)
ORACLE_MAINTAINED VARCHAR2(1)
SQL> update t1.large set owner = substr(owner, 1, 2);
1897686 rows updated.
SQL> commit;
Commit complete.
SQL> alter system checkpoint;
System altered.
SQL> select round(bytes/1024/1024) mb from dba_segments where owner='T1' and segment_name = 'LARGE';
MB
----------
256
[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.78GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=130.33MB
Metadata: total=8.00MB, used=0.00
I tried to change every block of that table. The table is 256MB, and 21.78-21.23=0.55GB was changed on disk. Just as a test, let’s try an update again (to see a little more of how BTRFS snapshots behave).
SQL> update t1.large set owner = 'AX';
1897686 rows updated.
SQL> commit;
Commit complete.
SQL> alter system checkpoint;
System altered.
[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=22.00GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=132.34MB
Metadata: total=8.00MB, used=0.00
I should have modified the same blocks, but BTRFS used another 0.22GB of disk space. I need to look more into how BTRFS snapshots work. But there you have it: PDB snapshot cloning without the restrictions 🙂
- Written by: ilmarkerm
- Category: Blog entry
- Published: July 25, 2014
I started to write this blog post about how 12c’s new multitenant architecture (pluggable databases) also gives us a new way to do database upgrades. But while writing it I discovered how catctl.pl can fail the upgrade if you have modified the SQL*Plus prompt with a custom login.sql.
I’m starting with these already created databases:
SID | Home | Description
---|---|---
cdb1 | /u01/app/oracle/product/12.1.0.1/db | The initial container, version 12.1.0.1. It hosts the pluggable database DB1 that contains my application data.
cdb2 | /u01/app/oracle/product/12.1.0.2/db | The target container, pre-created with version 12.1.0.2
Both use Oracle Managed Files (OMF).
The goal of this blog post is to upgrade database DB1 from version 12.1.0.1 to 12.1.0.2 by unplugging it from the CDB1 container and plugging it into the CDB2 container, which was created with version 12.1.0.2. The upgrade itself is simply unplugging DB1 from the old container, plugging it into the new container, and running the database upgrade scripts on the new container database.
sys@cdb1(cdbtest.aws.ee.dom)> alter pluggable database db1 close immediate;
Pluggable database altered.
Elapsed: 00:00:02.05
sys@cdb1(cdbtest.aws.ee.dom)> alter pluggable database db1 unplug into '/u01/data/db1_upgrade.xml';
Pluggable database altered.
Elapsed: 00:00:01.41
[oracle@cdbtest data]$ . oraenv
ORACLE_SID = [cdb1] ? cdb2
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@cdbtest data]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.2/db
sys@cdb1(cdbtest.aws.ee.dom)> CREATE PLUGGABLE DATABASE db1 USING '/u01/data/db1_upgrade.xml' nocopy tempfile reuse;
Pluggable database created.
Elapsed: 00:00:10.16
sys@cdb1(cdbtest.aws.ee.dom)> alter pluggable database db1 open upgrade;
Warning: PDB altered with errors.
[oracle@cdbtest data]$ cd $ORACLE_HOME/rdbms/admin
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql
... cut
[CONTAINER NAMES]
CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]
Time: 2s For PDB(s)
Grand Total Time: 2s
LOG FILES: (catupgrd*.log)
Grand Total Upgrade Time: [0d:0h:0m:2s]
Something is wrong; 2 seconds is too fast. The logfile catupgrd0.log reveals one weird error:
sys@cdb2(cdbtest.aws.ee.dom)> sys@cdb2(cdbtest.aws.ee.dom)> ALTER SESSION SET CONTAINER = "0";
ERROR:
ORA-65011: Pluggable database 0 does not exist.
sys@cdb2(cdbtest.aws.ee.dom)> INSERT INTO sys.registry$error VALUES ('SYS', SYSDATE, 'catctl.pl', 'CATPROC', 'Invalid Upgrade on [0] Check catupgrd*.log', 'Invalid Upgrade');
1 row created.
Now that is a weird error; I specified DB1 as the PDB name. Let’s try again with serial execution. First, clean up the failed upgrade attempt:
$ sqlplus / as sysdba
SQL> DELETE FROM sys.registry$error;
1 row deleted.
SQL> commit;
Commit complete.
And run catctl.pl again, with the -S flag for serial execution:
[oracle@cdbtest ~]$ cd $ORACLE_HOME/rdbms/admin
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' -S catupgrd.sql
... cut
[CONTAINER NAMES]
CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]
PDB Serial Inclusion:[DB1] Exclusion:[0]
Running File In Serial Order FileName is catupgrd.sql
...
So, serial execution works. Let’s try one more thing. “Bug 17810688 PDB upgrade error running catcon.pl or catctl.pl” describes the PDB upgrade failing when sqlplus is not in PATH, so maybe this is some kind of parsing error of SQL*Plus output that only shows itself when PDBs are upgraded? As you saw above, I’m using a custom SQL*Plus prompt 🙂 So let’s remove my custom login.sql and try running the upgrade again, in parallel.
[oracle@cdbtest admin]$ echo $SQLPATH
/home/oracle/dbscripts/oracle
[oracle@cdbtest admin]$ mv $SQLPATH/login.sql $SQLPATH/login.sql.bak
[oracle@cdbtest admin]$ pwd
/u01/app/oracle/product/12.1.0.2/db/rdbms/admin
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql
... cut
[CONTAINER NAMES]
CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]
------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[DB1] Exclusion:[]
Serial Phase #: 0 Files: 1 Time: 16s DB1
Serial Phase #: 1 Files: 5
So the upgrade started to run this time. I’ll cancel it and debug further. Could this be because of the custom prompt? I’ll try again, this time only commenting out the line that sets the custom prompt (set sqlprompt):
[oracle@cdbtest admin]$ mv $SQLPATH/login.sql.bak $SQLPATH/login.sql
[oracle@cdbtest admin]$ vi $SQLPATH/login.sql
define _editor=vi
set serveroutput on size 1000000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
column global_name new_value gname
set termout off
define gname=idle
column global_name new_value gname
select lower(user)||'@'||instance_name||'('||host_name||')' global_name from v$instance;
--set sqlprompt '&gname> '
set termout on
column db_unique_name format a20
column database_role format a16
column db_version format a10
column db_status format a8
column db_compatible format a10
column db_version heading VERSION
column db_status heading STATUS
column db_compatible heading COMPATIBLE
select d.db_unique_name, d.database_role, v.version db_version, p.value db_compatible, v.status db_status
from v$database d, v$instance v, v$parameter p where p.name = 'compatible';
[oracle@cdbtest admin]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Tue Aug 5 15:24:41 2014
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
DB_UNIQUE_NAME DATABASE_ROLE VERSION COMPATIBLE STATUS
-------------------- ---------------- ---------- ---------- --------
cdb2 PRIMARY 12.1.0.2.0 12.1.0.2.0 OPEN
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@cdbtest admin]$ pwd
/u01/app/oracle/product/12.1.0.2/db/rdbms/admin
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql
... cut
[CONTAINER NAMES]
CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]
Time: 2s For PDB(s)
Grand Total Time: 2s
LOG FILES: (catupgrd*.log)
Grand Total Upgrade Time: [0d:0h:0m:2s]
Still doesn’t work. Maybe it doesn’t like the database information output; I’ll put the custom prompt back and remove the DB information query instead.
[oracle@cdbtest admin]$ cat $SQLPATH/login.sql
define _editor=vi
set serveroutput on size 1000000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
column global_name new_value gname
set termout off
define gname=idle
column global_name new_value gname
select lower(user)||'@'||instance_name||'('||host_name||')' global_name from v$instance;
set sqlprompt '&gname> '
set termout on
column db_unique_name format a20
column database_role format a16
column db_version format a10
column db_status format a8
column db_compatible format a10
column db_version heading VERSION
column db_status heading STATUS
column db_compatible heading COMPATIBLE
--select d.db_unique_name, d.database_role, v.version db_version, p.value db_compatible, v.status db_status
-- from v$database d, v$instance v, v$parameter p where p.name = 'compatible';
[oracle@cdbtest admin]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Tue Aug 5 15:28:42 2014
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
sys@cdb2(cdbtest.aws.ee.dom)> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql
... cut
[CONTAINER NAMES]
CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]
------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[DB1] Exclusion:[]
Serial Phase #: 0 Files: 1 Time: 19s DB1
Serial Phase #: 1 Files: 5
... cut
Grand Total Time: 1404s
LOG FILES: (catupgrd*.log)
Grand Total Upgrade Time: [0d:0h:23m:24s]
And it starts upgrading! So if you have a custom login.sql for SQL*Plus, keep an eye on this during upgrades. It is a rather strange bug, because I had already used catctl.pl with the same login.sql setup to upgrade a non-CDB database to 12.1.0.2.
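Instead of editing login.sql, a workaround sketch (untested here) would be to clear SQLPATH, the variable shown earlier that tells SQL*Plus where to find login.sql, just for the upgrade run:

```shell
# SQLPATH tells SQL*Plus where to look for login.sql (value from the transcript above).
export SQLPATH=/home/oracle/dbscripts/oracle

# env -u removes the variable for a single command, so the sqlplus sessions
# spawned by that command never pick up the custom login.sql:
env -u SQLPATH sh -c 'echo "SQLPATH=${SQLPATH:-unset}"'
# The real invocation would then be something like:
#   env -u SQLPATH $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql
```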
But back to the DB upgrade… Now the pluggable database upgrade is finished and we can open it:
sys@cdb2(cdbtest.aws.ee.dom)> select name, open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
DB1 MOUNTED
sys@cdb2(cdbtest.aws.ee.dom)> alter pluggable database db1 open;
Pluggable database altered.
NB! Take special care with physical standby databases when going down this upgrade path!
Free MySQL seminar on 27 August 2014 @ 13:00. Announcement by Oracle User Group Estonia:
Developing modern applications using MySQL.
In this seminar series learn how to best use MySQL for your existing and new development requirements with leading MySQL expert and Oracle Ace Director Ronald Bradford.
These presentations provide a detailed review of the essential lifecycle components for developing a successful software application and offer a checklist for your company to review the design, development, deployment and support of your business applications with MySQL.
The presentations include:
* Effective MySQL Architecture and Design Practices
* Effective Software Development with MySQL
* Effective Web Site Operations
* Upcoming MySQL features for modern applications
Detailed description of the topics: read here.
More information about Ronald Bradford:
http://ronaldbradford.com/
https://en.wikipedia.org/wiki/Ronald_Bradford
https://apex.oracle.com/pls/apex/f?p=19297:4:::NO:4:P4_ID:1820
To attend this event, PLEASE REGISTER!
This event is organized by Oracle User Group Estonia in cooperation with Finnish, Swedish and Latvian user groups.
The event in Tallinn is sponsored by TransferWise.
If you require more information about this event, please contact ouge@ouge.eu