- Written by: ilmarkerm
- Category: Blog entry
- Published: October 29, 2014
Got an interesting question today: there are tables TABLE1 and TABLE2, plus a junction table called JUNCTABLE that joins them. The following output is needed as XML:
<table1>
  <table2></table2>
  <table2></table2>
</table1>
<table1>
  <table2></table2>
  <table2></table2>
</table1>
I know the output could be aggregated using XMLAGG, but I have never looked into how to format the tag names and attributes in the output as requested.
Data in my very simplified sample tables:
SQL> select * from table1;
ID
----------
1
2
3
SQL> select * from table2;
ID
----------
11
12
13
14
SQL> select * from junctable;
     T1_ID      T2_ID
---------- ----------
         1         11
         1         12
         2         12
         2         13
         3         14
Expected output:
<table1 id="1">
  <table2 id="11"></table2>
  <table2 id="12"></table2>
</table1>
<table1 id="2">
  <table2 id="12"></table2>
  <table2 id="13"></table2>
</table1>
<table1 id="3">
  <table2 id="14"></table2>
</table1>
Now the query itself:
SELECT XMLAGG (xmlpieces)
  FROM ( SELECT XMLELEMENT ("table1", xmlattributes (t1.id AS "id"),
                   XMLAGG (XMLELEMENT ("table2", xmlattributes (t2.id AS "id")))
                ) xmlpieces
           FROM table1 t1
           JOIN junctable j ON j.t1_id = t1.id
           JOIN table2 t2 ON t2.id = j.t2_id
          GROUP BY t1.id);
And the result:
<table1 id="1">
  <table2 id="11"></table2>
  <table2 id="12"></table2>
</table1>
<table1 id="2">
  <table2 id="12"></table2>
  <table2 id="13"></table2>
</table1>
<table1 id="3">
  <table2 id="14"></table2>
</table1>
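The XMLAGG call above returns an XMLTYPE; if a pretty-printed text version is needed, the aggregate can be wrapped in a root element and serialized. A quick sketch that is not part of the original question: the "tables" root element is made up here just to give XMLSERIALIZE a single document to work with.
SELECT XMLSERIALIZE (
          DOCUMENT
          XMLELEMENT ("tables", XMLAGG (xmlpieces))   -- made-up root element
          AS CLOB INDENT SIZE = 2) AS pretty_xml
  FROM ( SELECT XMLELEMENT ("table1", xmlattributes (t1.id AS "id"),
                   XMLAGG (XMLELEMENT ("table2", xmlattributes (t2.id AS "id")))
                ) xmlpieces
           FROM table1 t1
           JOIN junctable j ON j.t1_id = t1.id
           JOIN table2 t2 ON t2.id = j.t2_id
          GROUP BY t1.id);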
If you are in Estonia, then join our Oracle User Group Estonia Meetup #3. I'll talk there about my latest trip to Oracle Open World and what I heard there, and I'll also present how to use the CloneDB feature.
- Written by: ilmarkerm
- Category: Blog entry
- Published: October 27, 2014
This post is for those who google these errors; maybe it will help save some time. I've been playing around with CloneDB for a while now and every time I hit these errors it takes me a while to figure out what the problem actually is 🙂 The error messages are not really helpful at all. Maybe I'll remember it now.
When creating a new clone database using CloneDB, after creating the new controlfile you need to execute dbms_dnfs.clonedb_renamefile(sourcefile, deltafile) for each of the data files in the image copy (sourcefile) to create a file that will hold the changes made to that datafile (deltafile). For example, if one of the data files in the image copy is named /nfs/source/user_data.dbf and I want to store the changes made to this file in /u02/db/delta/user_data.dbf, then I need to execute:
SQL> exec dbms_dnfs.clonedb_renamefile('/nfs/source/user_data.dbf', '/u02/db/delta/user_data.dbf');
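With more than a few datafiles this gets tedious, so the calls can be generated in a loop. A minimal sketch, assuming the clone instance is already mounted with the new controlfile pointing at the image copies, and assuming all delta files should go under a hypothetical /u02/db/delta directory:
begin
   -- v$datafile lists the image copy files referenced by the new controlfile
   for f in (select name from v$datafile) loop
      dbms_dnfs.clonedb_renamefile (
         f.name,
         '/u02/db/delta/' || substr (f.name, instr (f.name, '/', -1) + 1));
   end loop;
end;
/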
Sometimes when I run clonedb_renamefile I get strange errors:
SQL> exec dbms_dnfs.clonedb_renamefile('/nfs/source/user_data.dbf', '/u02/db/delta/user_data.dbf');
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01141: error renaming data file 6 - new file '/u02/db/delta/user_data.dbf' not found
ORA-01110: data file 6: '/nfs/source/user_data.dbf'
ORA-17503: ksfdopn:1 Failed to open file /u02/db/delta/user_data.dbf
ORA-17515: Creation of clonedb failed using snapshot file /nfs/source/user_data.dbf
ORA-06512: at "SYS.X$DBMS_DNFS", line 10
ORA-06512: at line 2
This happens when the clonedb init.ora parameter is FALSE (the default). So before running dbms_dnfs.clonedb_renamefile, make sure clonedb is set to TRUE. If it is not, change the parameter and bounce the database.
SQL> show parameter clonedb
NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
clonedb                              boolean     FALSE
SQL> alter system set clonedb=true scope=spfile;
SQL> startup mount force
PS. Starting from 12.1.0.2 the delta file no longer needs to be stored on an NFS server (and accessed using dNFS) and can be a local file instead, but the problem remains; the error output above is from 12.1.0.2.
- Written by: ilmarkerm
- Category: Blog entry
- Published: August 29, 2014
12c gave us, among other things, two new features: DBMS_UTILITY.EXPAND_SQL_TEXT and temporal validity. DBMS_UTILITY.EXPAND_SQL_TEXT is advertised as a means to expand SQL text that uses views, but it is also very useful for seeing how Oracle internally rewrites some SQL queries. Here I'm going to use it to see how temporal validity queries are actually executed. All tests are done using 12.1.0.2 Enterprise Edition.
First I'll create a test table with two periods – VALID and ACTIVE. VALID uses the DATE datatype and ACTIVE uses hidden TIMESTAMP columns.
CREATE TABLE temporal
(
   id            NUMBER PRIMARY KEY,
   t             VARCHAR2 (100),
   valid_start   DATE,
   valid_end     DATE,
   PERIOD FOR valid (valid_start, valid_end)
);
ALTER TABLE temporal
ADD period FOR active;
SQL> select column_name, data_type, hidden_column from user_tab_cols where table_name = 'TEMPORAL' order by column_id;
COLUMN_NAME          DATA_TYPE                      HID
-------------------- ------------------------------ ---
ID                   NUMBER                         NO
T                    VARCHAR2                       NO
VALID_START          DATE                           NO
VALID_END            DATE                           NO
ACTIVE               NUMBER                         YES
ACTIVE_START         TIMESTAMP(6) WITH TIME ZONE    YES
ACTIVE_END           TIMESTAMP(6) WITH TIME ZONE    YES
VALID                NUMBER                         YES

8 rows selected.
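The original post does not load any data, but for illustration a row could be inserted like this (hypothetical values; note that the hidden ACTIVE_START/ACTIVE_END columns have to be named explicitly to populate them):
-- hypothetical sample row, just to show how the period columns get populated
INSERT INTO temporal (id, t, valid_start, valid_end, active_start, active_end)
     VALUES (1, 'first row',
             DATE '2013-01-01', DATE '2013-06-01',
             TIMESTAMP '2013-01-01 10:00:00 +02:00', NULL);
COMMIT;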
The first syntax that can be used to query temporal data is select id, t from temporal AS OF PERIOD FOR valid TIMESTAMP. Let's use DBMS_UTILITY.EXPAND_SQL_TEXT to see how Oracle actually executes this query (the result is formatted for better readability using Toad).
declare
   c clob;
begin
   DBMS_UTILITY.EXPAND_SQL_TEXT (q'[select id, t from temporal as of period for valid DATE'2013-02-01']', c);
   dbms_output.put_line (c);
end;
/
SELECT "A1"."ID" "ID", "A1"."T" "T"
FROM (SELECT "A2"."ID" "ID",
"A2"."T" "T",
"A2"."VALID_START" "VALID_START",
"A2"."VALID_END" "VALID_END"
FROM "DBAUSER"."TEMPORAL" "A2"
WHERE ("A2"."VALID_START" IS NULL OR "A2"."VALID_START" <= TO_DATE (' 2013-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
AND ("A2"."VALID_END" IS NULL OR "A2"."VALID_END" > TO_DATE (' 2013-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))) "A1"
No surprises here: Oracle just replaced the AS OF PERIOD clause with the corresponding WHERE predicates. Another way to set the validity period is using DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME. Then select id, t from temporal is turned into:
SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT');
declare
   c clob;
begin
   DBMS_UTILITY.EXPAND_SQL_TEXT (q'[select id, t from temporal]', c);
   dbms_output.put_line (c);
end;
/
SELECT "A1"."ID" "ID", "A1"."T" "T"
FROM (SELECT "A2"."ID" "ID",
"A2"."T" "T",
"A2"."VALID_START" "VALID_START",
"A2"."VALID_END" "VALID_END"
FROM "DBAUSER"."TEMPORAL" "A2"
WHERE ("A2"."VALID_START" IS NULL OR "A2"."VALID_START" <= SYSTIMESTAMP (6))
AND ("A2"."VALID_END" IS NULL OR "A2"."VALID_END" > SYSTIMESTAMP (6))
AND ("A2"."ACTIVE_START" IS NULL OR "A2"."ACTIVE_START" <= SYSTIMESTAMP (6))
AND ("A2"."ACTIVE_END" IS NULL OR "A2"."ACTIVE_END" > SYSTIMESTAMP (6))) "A1"
Two interesting things to note here. First, DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME does not currently offer a way to name the period, so the restriction is applied to ALL periods defined on the table. Second, the columns valid_start and valid_end were created as DATE, but the comparison values are TIMESTAMP, making index use impossible (DATE is automatically converted to TIMESTAMP, not the other way around). Not really a problem, just something to take note of, since the default hidden period columns are created as TIMESTAMP, not DATE.
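If index access on the DATE columns matters, one workaround (just a sketch, not tested in the original post) is to skip ENABLE_AT_VALID_TIME for those queries and name the period explicitly with a DATE expression, so the generated predicates stay in the DATE domain as in the first expansion above:
-- explicit period name with a DATE expression keeps the rewrite in DATE terms
select id, t
  from temporal as of period for valid SYSDATE;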
- Written by: ilmarkerm
- Category: Blog entry
- Published: August 18, 2014
Before 11g, if you added a new column with a default value to a table, Oracle had to lock and physically update all rows in the table. Possibly a painful process 🙂 11g helped with that a little by introducing metadata-only default values for NOT NULL columns, so the change was done only in the data dictionary, not in the table data.
SQL> create table default_values (
2 id number primary key
3 );
Table created.
SQL> insert into default_values values (1);
1 row created.
SQL> alter table default_values add def_w_not_null varchar2(20) default 'THIS IS DEFAULT' not null;
Table altered.
SQL> select * from default_values;
        ID DEF_W_NOT_NULL
---------- --------------------
         1 THIS IS DEFAULT
SQL> select * from table(dbms_xplan.display_cursor(null,null,'projection'));
...
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "DEFAULT_VALUES"."ID"[NUMBER,22],
NVL("DEFAULT_VALUES"."DEF_W_NOT_NULL",'THIS IS DEFAULT')[20]
...
So internally Oracle just rewrites the column with the following expression: NVL("DEFAULT_VALUES"."DEF_W_NOT_NULL",'THIS IS DEFAULT'). Makes sense 🙂
12c now adds the possibility of a metadata-only default value for nullable columns as well. Let's see what happens:
SQL> alter table default_values add def_w_null varchar2(20) default 'THIS IS DEFAULT2';
Table altered.
SQL> select id, def_w_null from default_values;
        ID DEF_W_NULL
---------- --------------------
         1 THIS IS DEFAULT2
SQL> select * from table(dbms_xplan.display_cursor(null,null,'projection'));
...
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "ID"[NUMBER,22], DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00003$",0)),NULL
,NVL("DEF_W_NULL",'THIS IS DEFAULT2'),'0',NVL("DEF_W_NULL",'THIS IS
DEFAULT2'),'1',"DEF_W_NULL")[20]
We have a much more complex expression now: DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00003$",0)),NULL,NVL("DEF_W_NULL",'THIS IS DEFAULT2'),'0',NVL("DEF_W_NULL",'THIS IS DEFAULT2'),'1',"DEF_W_NULL"). And also a new hidden column SYS_NC00003$. What is in there?
SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;
        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
         1 THIS IS DEFAULT2     is null
SQL> insert into default_values (id, DEF_W_NULL) values (2, null);
1 row created.
SQL> commit;
Commit complete.
SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;
        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
         1 THIS IS DEFAULT2     is null
         2                      is not null 01
SQL> insert into default_values (id, DEF_W_NULL) values (3, 'some value');
1 row created.
SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;
        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
         1 THIS IS DEFAULT2     is null
         2                      is not null 01
         3 some value           is not null 01
SQL> insert into default_values (id) values (4);
1 row created.
SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;
        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
         1 THIS IS DEFAULT2     is null
         2                      is not null 01
         3 some value           is not null 01
         4 THIS IS DEFAULT2     is not null 01
This new column seems to hold a bit that indicates whether the value in the column "should be trusted" or replaced with the default. So all the rows that existed before the new column was added have NULL in this new column, but all newly modified rows set a specific bit to 1. Looking at the original expression, there is also a possibility that this bit is set to 0; maybe this newly added column can hold other "row status bits" as well?
DECODE (TO_CHAR (sys_op_vecbit ("SYS_NC00003$", 0)),
NULL, NVL ("DEF_W_NULL", 'THIS IS DEFAULT2'),
'0', NVL ("DEF_W_NULL", 'THIS IS DEFAULT2'),
'1', "DEF_W_NULL"
)
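Out of curiosity, the flag bit can also be inspected directly using the same internal (undocumented) SYS_OP_VECBIT function that appears in the projection. A quick sketch, with the SYS_NC00003$ name obviously specific to this test table:
-- bit 0 is NULL for pre-ALTER rows and 1 for rows touched after the column was added
select id, def_w_null, sys_op_vecbit (SYS_NC00003$, 0) as flag_bit
  from default_values;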