Ilmar Kerm

Oracle, databases, Linux and maybe more

Most (all?) of the Oracle ZFS Storage Appliance (ZS3-4 in my case) management functions are also exposed through a REST API, so it is possible to script and automate storage management tasks. No proprietary clients or GUI are needed; you just need to write a simple script that sends some REST commands over HTTPS to the appliance and parses the JSON output.
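
To illustrate the idea, here is a minimal sketch of preparing an authenticated request to the ZFSSA REST API using only the Python standard library. The appliance address, credentials and pool/project names are hypothetical, and depending on configuration the appliance may require a separate login call instead of basic authentication; this only shows the URL and header plumbing.

```python
# Minimal sketch of talking to the ZFSSA REST API with the standard library.
# Host, credentials and pool/project names below are hypothetical.
import base64
import urllib.request

ZFSSA = "https://zfssa.example.com:215"  # hypothetical appliance address


def api_url(path):
    """Build a full URL for a ZFSSA storage REST API path."""
    return ZFSSA + "/api/storage/v1" + path


def api_request(path, user="root", password="secret"):
    """Prepare an authenticated GET request (built here, not sent)."""
    req = urllib.request.Request(api_url(path))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req


# Listing the LUNs of a project would be a GET to this URL; the JSON
# response body is then parsed with json.loads().
req = api_request("/pools/pool1/projects/proj1/luns")
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) and parsing the JSON body is all the "client" the appliance needs.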

I had a task of automating test database refresh from production, and part of that workflow is cloning and attaching LUNs from ZFSSA. I did not find any sample code for accessing ZFSSA online, so here I am publishing my code. It is a Python script that makes use of a few ZFSSA REST API calls and publishes them as Fabric tasks.

This script exposes four tasks:

  • clone – Creates a new clone from replication
  • attach_stage1 – Attaches all LUNs in a project to a single target server
  • attach_stage2 – Attaches all LUNs in a project to multiple target servers
  • drop – Deletes a project

The ZFSSA REST API works fine if each LUN has only a single initiator group, but when assigning a LUN to multiple initiator groups (to be used in a RAC setup), I observed the following errors (they must be ZFSSA REST API bugs):

  • When modifying a LUN to be part of multiple initiator groups (PUT request to /api/storage/v1/pools/poolname/projects/projectname/luns/lunname), the API responds with (argument numbers may differ): {"fault": {"message": "invalid input argument (assignednumber: invalid literal for int() with base 10: '0,0,0')", "code": 400, "name": "ERR_INVALID_ARG"}}. Nevertheless, the ZFS GUI confirms that the operation completed successfully. This script simply ignores the returned error message when assigning a LUN to multiple initiator groups.
  • If any LUN in the project has multiple initiator groups defined, then the API call to get the list of LUNs in a project (GET request to /api/storage/v1/pools/poolname/projects/projectname/luns) immediately returns an error: {"luns":[{"fault": {"message": "internal error", "code": 500, "name": "ERR_INTERNAL"}}]}. This script cannot reattach the LUNs in this case, since getting the list of LUNs fails.
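
The first workaround above — ignoring the apparently spurious fault — can be isolated into a small predicate. This is a sketch of my reading of the fault body quoted above, not an official error contract; a caller would send the PUT and treat a matching 400 response as success:

```python
# Sketch: detect the (apparently spurious) ERR_INVALID_ARG fault that ZFSSA
# returns when a LUN is assigned to multiple initiator groups, so the
# caller can ignore it. Based on the fault body observed above.
import json


def is_spurious_multi_initgroup_fault(status_code, body):
    """True if the response looks like the known bogus 400 fault."""
    if status_code != 400:
        return False
    try:
        fault = json.loads(body).get("fault", {})
    except ValueError:
        return False  # not JSON at all -> a real error
    return (fault.get("name") == "ERR_INVALID_ARG"
            and "assignednumber" in fault.get("message", ""))


# The fault body as observed (argument numbers may differ):
body = ('{"fault": {"message": "invalid input argument (assignednumber: '
        "invalid literal for int() with base 10: '0,0,0')\", "
        '"code": 400, "name": "ERR_INVALID_ARG"}}')
print(is_spurious_multi_initgroup_fault(400, body))  # True
```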

Yesterday I attended the #cluboracle event in London (I highly recommend this event! … but I don’t recommend the Thameslink train service 🙂 ). At this event an interesting discussion popped up regarding the 12c feature of adding columns with DEFAULT values to existing tables. I’ve already written about this feature, so in this post I’ll clarify some points as I understand them and test out a few more things.

Here is a quote from my last post summing up my findings so far:

This new column seems to set a bit indicating whether the value in the column “should be trusted” or replaced with a default. So, all the rows that existed before the new column was added have NULL in this new column, but all the newly modified rows set a specific bit to 1. Looking at the original expression, there is also a possibility that this bit is set to 0; maybe this newly added column can contain other “row status bits” as well?

DECODE (TO_CHAR (sys_op_vecbit ("SYS_NC00003$", 0)),
    NULL, NVL ("DEF_W_NULL", 'THIS IS DEFAULT2'),
    '0', NVL ("DEF_W_NULL", 'THIS IS DEFAULT2'),
    '1', "DEF_W_NULL"
)
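
The projection above can be modeled in a few lines of Python. This is a sketch of my reading of the DECODE expression, not Oracle internals: bit 0 of the hidden column decides whether the stored value is trusted or substituted with the column’s DEFAULT.

```python
# Model of the projection expression above (my interpretation, not Oracle
# internals): bit 0 of the hidden column controls default substitution.
def projected_value(bit0, stored, default="THIS IS DEFAULT2"):
    """bit0 is None (pre-existing row), 0 or 1; stored is the on-disk value."""
    if bit0 is None or bit0 == 0:
        # NVL("DEF_W_NULL", 'THIS IS DEFAULT2'): substitute the default
        return stored if stored is not None else default
    # bit set: trust the stored value, even if it is NULL
    return stored


print(projected_value(None, None))  # pre-existing row, nothing stored
print(projected_value(1, None))     # new row with an explicit NULL
```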

Therefore… only the rows that existed previously don’t have the default value written to the data blocks; all rows added after adding the column have the DEFAULT value written to the data block directly. What happens if you update an old row?

SQL> create table default_values (
   id number primary key
   );

Table created.

SQL>  insert into default_values values (1);

1 row created.

SQL>  insert into default_values values (2);

1 row created.

SQL>  insert into default_values values (3);

1 row created.

SQL>  insert into default_values values (4);

1 row created.

SQL> alter table default_values add def_w_null varchar2(20) default 'THIS IS DEFAULT2';

Table altered.

SQL> insert into default_values (id) values (5);

1 row created.

SQL> commit;

Commit complete.

Here is the projection information for column DEF_W_NULL.

DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00002$",0)),
  NULL,NVL("DEF_W_NULL",'THIS IS DEFAULT2'),
  '0',NVL("DEF_W_NULL",'THIS IS DEFAULT2'),
  '1',"DEF_W_NULL")

The hidden column containing the bitmap in this test is SYS_NC00002$. Here is the current contents of this column and the interesting bit.

SQL> select id, DEF_W_NULL, nvl2(SYS_NC00002$,'is not null','is null') is_hidden_col_null, SYS_NC00002$, TO_CHAR (sys_op_vecbit(SYS_NC00002$,0)) bitval  FROM default_values;

        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00002$  BITVAL
---------- -------------------- ----------- ------------- --------
         1 THIS IS DEFAULT2     is null
         2 THIS IS DEFAULT2     is null
         3 THIS IS DEFAULT2     is null
         4 THIS IS DEFAULT2     is null
         5 THIS IS DEFAULT2     is not null 01            1

All as expected so far. IDs 1-4 were added before adding the new column, so SYS_NC00002$ IS NULL for them and the NULL value in the data block is replaced with the metadata default value. ID=5 was added after, so its BITVAL=1, and from the projection information we see that this column is displayed as it is written to the data block. What happens if I update the ID=1 row but don’t touch the DEF_W_NULL column?

SQL> update default_values set id=10 where id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select id, DEF_W_NULL, nvl2(SYS_NC00002$,'is not null','is null') is_hidden_col_null, SYS_NC00002$, TO_CHAR (sys_op_vecbit(SYS_NC00002$,0))bitval  FROM default_values;

        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00002$  BITVAL
---------- -------------------- ----------- ------------- --------
        10 THIS IS DEFAULT2     is null
         2 THIS IS DEFAULT2     is null
         3 THIS IS DEFAULT2     is null
         4 THIS IS DEFAULT2     is null
         5 THIS IS DEFAULT2     is not null 01            1

Nothing happened to the DEF_W_NULL column; Oracle only updated the ID column. What if I update the DEF_W_NULL column first with the keyword DEFAULT and then with the same literal value?

SQL> update default_values set def_w_null=default, id=11 where id=2;

1 row updated.

SQL> update default_values set def_w_null='THIS IS DEFAULT2', id=12 where id=3;

1 row updated.

SQL> commit;

Commit complete.

SQL> select id, DEF_W_NULL, nvl2(SYS_NC00002$,'is not null','is null') is_hidden_col_null, SYS_NC00002$, TO_CHAR (sys_op_vecbit(SYS_NC00002$,0))bitval  FROM default_values;

        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00002$  BITVAL
---------- -------------------- ----------- ------------- --------
        10 THIS IS DEFAULT2     is null
        11 THIS IS DEFAULT2     is not null 01            1
        12 THIS IS DEFAULT2     is not null 01            1
         4 THIS IS DEFAULT2     is null
         5 THIS IS DEFAULT2     is not null 01            1

Oracle has updated the physically stored value, so keep an eye on row migration possibilities!

What happens if I now change the default value in the table metadata? Looking at the projection information, it must update the existing rows to store the old default value. Let’s verify.

SQL> alter table default_values modify DEF_W_NULL varchar2(20) default 'NEW DEFAULT';

Table altered.

SQL> insert into default_values (id) values (6);

1 row created.

SQL> commit;

Commit complete.

SQL> select id, DEF_W_NULL, nvl2(SYS_NC00002$,'is not null','is null') is_hidden_col_null, SYS_NC00002$, TO_CHAR (sys_op_vecbit(SYS_NC00002$,0))bitval  FROM default_values;

        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00002$  BITVAL
---------- -------------------- ----------- ------------- --------
        10 THIS IS DEFAULT2     is null
        11 THIS IS DEFAULT2     is not null 01            1
        12 THIS IS DEFAULT2     is not null 01            1
         4 THIS IS DEFAULT2     is null
         5 THIS IS DEFAULT2     is not null 01            1
         6 NEW DEFAULT          is not null 01            1

Whoa… really interesting. I didn’t do anything! And I think it makes sense now: Oracle only has to store the initial DEFAULT value, the one in force when the column was not yet present in the table; rows added later already have the correct value written to the data blocks. It must also mean that Oracle stores two different default values in the table metadata: the currently active default value and the initial value from when the column was added.
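
The conclusion above can be sketched as a tiny model: the metadata keeps two defaults, ALTER only touches the current one, and pre-existing rows are still back-filled with the initial one at read time. This is an illustration of the reasoning, not Oracle’s actual data structures.

```python
# Sketch of the conclusion above: two defaults live in table metadata.
class ColumnDefaults:
    def __init__(self, initial):
        self.initial = initial   # substituted for pre-existing rows on read
        self.current = initial   # applied to new inserts

    def alter_default(self, new_default):
        # ALTER TABLE ... MODIFY ... DEFAULT only changes the current default
        self.current = new_default

    def read(self, bit0, stored):
        # pre-existing rows (hidden bit not set, NULL stored) still get
        # the *initial* default, exactly as observed in the test above
        if bit0 is None and stored is None:
            return self.initial
        return stored


d = ColumnDefaults("THIS IS DEFAULT2")
d.alter_default("NEW DEFAULT")
print(d.read(None, None))  # pre-existing row: still the initial default
print(d.current)           # new inserts get the new default
```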

The last small thing I want to test: if I have multiple nullable columns with default values in the same table, does Oracle add one hidden column for each of them?

SQL> alter table default_values add def_w_null2 varchar2(20) default 'THIS IS DEFAULT 3';

Table altered.

SQL> insert into default_values (id) values (7);

1 row created.

SQL> commit;

Commit complete.

SQL> select id, DEF_W_NULL, nvl2(SYS_NC00002$,'is not null','is null') is_hidden_col_null, SYS_NC00002$, TO_CHAR (sys_op_vecbit(SYS_NC00002$,0))bitval1, TO_CHAR (sys_op_vecbit(SYS_NC00002$,1)) bitval2  FROM default_values where id in (6,7);

        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00002$  BITVAL1   BITVAL2
---------- -------------------- ----------- ------------- --------- --------
         6 NEW DEFAULT          is not null 01            1         0
         7 NEW DEFAULT          is not null 03            1         1

The new column projection information:

DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00002$",1)),
  NULL,NVL("DEF_W_NULL2",'THIS IS DEFAULT 3'),
  '0',NVL("DEF_W_NULL2",'THIS IS DEFAULT 3'),
  '1',"DEF_W_NULL2")

I’m pleasantly surprised again! It did not add a new hidden column; it just used another bit in the same one.
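
The bit packing observed above can be sketched as follows. This mirrors how sys_op_vecbit appears to behave on the hex-dumped hidden column (raw 01 has bit 0 set, raw 03 has bits 0 and 1 set); it is an interpretation of the test output, not documented internals.

```python
# Sketch of sys_op_vecbit on the hidden bitmap column: one RAW column can
# track many DEFAULT-bearing columns, one bit each (bit 0 = LSB of byte 0).
def vecbit(raw_hex, bit):
    """Return bit number `bit` of a hex-encoded byte string."""
    data = bytes.fromhex(raw_hex)
    byte, offset = divmod(bit, 8)
    return (data[byte] >> offset) & 1


print(vecbit("01", 0), vecbit("01", 1))  # row 6: DEF_W_NULL bit only
print(vecbit("03", 0), vecbit("03", 1))  # row 7: both columns' bits set
```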

Got an interesting question today: there are tables TABLE1 and TABLE2, plus a junction table that joins them called JUNCTABLE. The following output is needed as XML:

<table1>
  <table2></table2>
  <table2></table2>
</table1>
<table1>
  <table2></table2>
  <table2></table2>
</table1>

I know the output can be aggregated using XMLAGG, but I have never looked into how to format the tag names and attributes in the output as requested.

Data in my very simplified sample tables:

SQL> select * from table1;

        ID
----------
         1
         2
         3

SQL> select * from table2;

        ID
----------
        11
        12
        13
        14

SQL> select * from junctable;

     T1_ID      T2_ID
---------- ----------
         1         11
         1         12
         2         12
         2         13
         3         14

Expected output:

  <table1>
    <table2></table2>
    <table2></table2>
  </table1>
  <table1>
    <table2></table2>
    <table2></table2>
  </table1>
  <table1>
    <table2></table2>
  </table1>

Now the query itself:

SELECT XMLAGG (xmlpieces)
  FROM (  SELECT XMLELEMENT ("table1", xmlattributes (t1.id AS "id"), 
                   XMLAGG (XMLELEMENT ("table2", xmlattributes (t2.id AS "id")))
                 ) xmlpieces
            FROM table1 t1
                 JOIN junctable j ON j.t1_id = t1.id
                 JOIN table2 t2 ON t2.id = j.t2_id
        GROUP BY t1.id);

And the result:

  <table1>
    <table2></table2>
    <table2></table2>
  </table1>
  <table1>
    <table2></table2>
    <table2></table2>
  </table1>
  <table1>
    <table2></table2>
  </table1>
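
For clarity, the same join / GROUP BY / nested-aggregate shape can be sketched outside the database. This is only an illustration of the grouping logic with the sample data from above (here the id attributes that the query adds via xmlattributes are shown explicitly), not how Oracle evaluates XMLAGG:

```python
# Same grouping shape as the SQL above, using the sample rows from the post.
import xml.etree.ElementTree as ET

table1 = [1, 2, 3]
junctable = [(1, 11), (1, 12), (2, 12), (2, 13), (3, 14)]

root = ET.Element("result")            # container for the aggregated pieces
for t1_id in table1:                   # GROUP BY t1.id
    e1 = ET.SubElement(root, "table1", id=str(t1_id))
    for _, t2_id in (row for row in junctable if row[0] == t1_id):
        ET.SubElement(e1, "table2", id=str(t2_id))  # inner XMLAGG

print(ET.tostring(root).decode())
```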

This post is for those who google these errors; maybe it will help save some time. I’ve been playing around with clonedb for some time now, and every time I hit these errors it takes me a while to figure out what the problem actually is 🙂 The error messages are not really helpful at all. Maybe I’ll remember it now.

When creating a new clone database using clonedb, after creating the new controlfile you need to execute dbms_dnfs.clonedb_renamefile(sourcefile, deltafile) for each of the data files in the image copy (sourcefile) to create a file that will hold the changes made to that datafile (deltafile). For example, if one of the data files in the image copy is named /nfs/source/user_data.dbf and I want to store changes made to this file as /u02/db/delta/user_data.dbf, then I need to execute:

SQL> exec dbms_dnfs.clonedb_renamefile('/nfs/source/user_data.dbf', '/u02/db/delta/user_data.dbf');

Sometimes when I run it I get strange errors:

SQL> exec dbms_dnfs.clonedb_renamefile('/nfs/source/user_data.dbf', '/u02/db/delta/user_data.dbf');

ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01141: error renaming data file 6 - new file '/u02/db/delta/user_data.dbf' not found
ORA-01110: data file 6: '/nfs/source/user_data.dbf'
ORA-17503: ksfdopn:1 Failed to open file /u02/db/delta/user_data.dbf
ORA-17515: Creation of clonedb failed using snapshot file /nfs/source/user_data.dbf
ORA-06512: at "SYS.X$DBMS_DNFS", line 10
ORA-06512: at line 2

This happens when the clonedb init.ora parameter is FALSE (the default). So before running dbms_dnfs.clonedb_renamefile, make sure clonedb is set to TRUE. If it is not, change the parameter and bounce the database.

SQL> show parameter clonedb

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
clonedb                              boolean     FALSE

SQL> alter system set clonedb=true scope=spfile;
SQL> startup mount force

PS. Starting from 12.1.0.2 the deltafile no longer needs to be stored on an NFS server (and accessed using dNFS) and can be a local file instead, but the problem remains; the error output above is from 12.1.0.2.
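
Since clonedb_renamefile has to be run once per datafile, generating the calls is easy to script. A minimal sketch under the same source/delta naming convention as the example above (directory names are hypothetical; this only produces the SQL text, it does not talk to the database):

```python
# Sketch: generate one dbms_dnfs.clonedb_renamefile call per datafile in
# the image copy. Paths below follow the example in the post.
def renamefile_calls(source_files, delta_dir):
    """Map each image-copy datafile to a delta file and emit the exec line."""
    calls = []
    for src in source_files:
        name = src.rsplit("/", 1)[-1]            # keep the datafile name
        delta = delta_dir.rstrip("/") + "/" + name
        calls.append(
            "exec dbms_dnfs.clonedb_renamefile('%s', '%s');" % (src, delta)
        )
    return calls


for line in renamefile_calls(["/nfs/source/user_data.dbf"], "/u02/db/delta"):
    print(line)
```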

Among other things, 12c gave us two new features: DBMS_UTILITY.EXPAND_SQL_TEXT and temporal validity. DBMS_UTILITY.EXPAND_SQL_TEXT is advertised as a means to expand SQL text that uses views, but it is also very useful for seeing how Oracle internally rewrites some SQL queries. Here I’m going to use it to see how temporal validity queries are actually executed. All tests are done using 12.1.0.2 Enterprise Edition.

First I’ll create a table for testing with two periods, VALID and ACTIVE. VALID uses explicit DATE columns and ACTIVE uses hidden TIMESTAMP columns.

CREATE TABLE temporal
(
  id            NUMBER PRIMARY KEY,
  t             VARCHAR2 (100),
  valid_start   DATE,
  valid_end     DATE,
  PERIOD FOR valid (valid_start, valid_end)
);

ALTER TABLE temporal
  ADD period FOR active;


SQL> select column_name, data_type, hidden_column from user_tab_cols where table_name = 'TEMPORAL' order by column_id;

COLUMN_NAME          DATA_TYPE                      HID
-------------------- ------------------------------ ---
ID                   NUMBER                         NO
T                    VARCHAR2                       NO
VALID_START          DATE                           NO
VALID_END            DATE                           NO
ACTIVE               NUMBER                         YES
ACTIVE_START         TIMESTAMP(6) WITH TIME ZONE    YES
ACTIVE_END           TIMESTAMP(6) WITH TIME ZONE    YES
VALID                NUMBER                         YES

8 rows selected.

The first syntax that can be used to query temporal data is select id, t from temporal AS OF PERIOD FOR valid TIMESTAMP. Let’s use DBMS_UTILITY.EXPAND_SQL_TEXT to see how Oracle actually executes this query (the result is formatted for better readability using Toad).

declare
  c clob;
begin
  DBMS_UTILITY.EXPAND_SQL_TEXT(q'[select id, t from temporal as of period for valid DATE'2013-02-01']', c);
  dbms_output.put_line(c);
end;
/

SELECT "A1"."ID" "ID", "A1"."T" "T"
  FROM (SELECT "A2"."ID" "ID",
               "A2"."T" "T",
               "A2"."VALID_START" "VALID_START",
               "A2"."VALID_END" "VALID_END"
          FROM "DBAUSER"."TEMPORAL" "A2"
         WHERE     ("A2"."VALID_START" IS NULL OR "A2"."VALID_START" <= TO_DATE (' 2013-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
               AND ("A2"."VALID_END" IS NULL OR "A2"."VALID_END" > TO_DATE (' 2013-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))) "A1"

No surprises here: Oracle just replaced the AS OF PERIOD clause with the corresponding WHERE predicates. Another way to set the validity period is using DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME. Then select id, t from temporal is turned into:

SQL> exec DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME('CURRENT');

declare
  c clob;
begin
  DBMS_UTILITY.EXPAND_SQL_TEXT(q'[select id, t from temporal]', c);
  dbms_output.put_line(c);
end;
/

SELECT "A1"."ID" "ID", "A1"."T" "T"
  FROM (SELECT "A2"."ID" "ID",
               "A2"."T" "T",
               "A2"."VALID_START" "VALID_START",
               "A2"."VALID_END" "VALID_END"
          FROM "DBAUSER"."TEMPORAL" "A2"
         WHERE     ("A2"."VALID_START" IS NULL OR "A2"."VALID_START" <= SYSTIMESTAMP (6))
               AND ("A2"."VALID_END" IS NULL OR "A2"."VALID_END" > SYSTIMESTAMP (6))
               AND ("A2"."ACTIVE_START" IS NULL OR "A2"."ACTIVE_START" <= SYSTIMESTAMP (6))
               AND ("A2"."ACTIVE_END" IS NULL OR "A2"."ACTIVE_END" > SYSTIMESTAMP (6))) "A1"

Two interesting things to note here. First, DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME currently has no way to name the period, so the restriction is applied to ALL periods defined on the table. Second, the columns valid_start and valid_end were created as DATE, but the values are compared as TIMESTAMP, making index use impossible (DATE is automatically converted to TIMESTAMP, not the other way around). Not really a problem, just something to take note of, since the default hidden period columns are created as TIMESTAMP, not DATE.
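
The rewrite that EXPAND_SQL_TEXT revealed is mechanical enough to sketch: every period on the table contributes the same open-ended pair of predicates. This is only a model of the pattern seen in the expanded SQL above, not of Oracle’s parser:

```python
# Sketch of the predicate pattern seen in the expanded SQL: for each period
# (start_col, end_col), Oracle adds a NULL-tolerant start and end check.
def period_predicates(periods, ts_expr="SYSTIMESTAMP (6)"):
    """Build the WHERE predicates Oracle appears to generate per period."""
    preds = []
    for start_col, end_col in periods:
        preds.append('("%s" IS NULL OR "%s" <= %s)' % (start_col, start_col, ts_expr))
        preds.append('("%s" IS NULL OR "%s" > %s)' % (end_col, end_col, ts_expr))
    return " AND ".join(preds)


# With ENABLE_AT_VALID_TIME, both periods on the test table are restricted:
print(period_predicates([("VALID_START", "VALID_END"),
                         ("ACTIVE_START", "ACTIVE_END")]))
```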