Archives: Installing the Oracle ZFS Storage Appliance simulator for your virtual storage requirements
Oracle has released its ZFS Storage Appliance software as a simulator. The product is sold as Tier 2 storage hardware, but with the simulator you get all of its powerful storage management features in your VirtualBox. The simulator is free and has no time limits or restrictions (well, one restriction – the simulator cannot be clustered).
Some use cases for the simulator:
* You can try out new ZFS Storage Appliance software patches before applying them to the real physical storage box
* You want to test some Oracle database features that are only activated when using Oracle storage (HCC)
* You want to provide shared network storage for your VMs, with advanced storage capabilities like snapshots, cloning, compression, deduplication, remote replication, encryption, etc.
* You want to evaluate ZFS Storage Appliance features before purchasing the real box or before using a specific feature in production
Setting up the simulator under VirtualBox is very simple and quick. In the configuration below I make no effort to secure the system, since it is intended to be used only in my VirtualBox environment.
Requirements
VirtualBox 4.2.12 or later.
I’m using VirtualBox 5.
Download the software
Go to the ZFS Storage software page and click Try the simulator. First you have to register, then you can download the ZIP file containing the software.
Unzip the downloaded file and you’ll get an OracleZFSStorageVM directory with 18 files under it.
Import VM to Virtualbox
Open Virtualbox Manager, go to File > Import Appliance.
Browse to Oracle_ZFS_Storage.ovf from the unzipped software directory.
Click Continue.
On the next screen you can view the imported VM settings and then press Import.
Initial configuration
After importing you will have a new VM called Oracle_ZFS_Storage.
Open Settings for this VM and go to the Network tab. Verify that it is connected to the correct virtual network. Mine was automatically attached to the only Host-only Adapter network I have, and I’m going to keep that setting.
Now launch the VM.
It takes 10+ seconds to boot up, and on the first boot it will ask you a few questions about basic network settings and the root user password.
Supply the requested values:
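In my case, on a VirtualBox host-only network, I used values along these lines (the exact prompt labels may differ slightly between simulator versions, and the host name, router, and DNS values here are just my own picks):
Host Name: zfssim
DNS Domain: localdomain
IP Address: 192.168.56.98
IP Netmask: 255.255.255.0
Default Router: 192.168.56.1
DNS Server: 192.168.56.1
Password: the root password you will later use to log in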
Press Enter when done. After that it will show you a blank screen for 30+ seconds, and once it is done the system is ready.
Now open the URL it displays (https://192.168.56.98:215/ in my case) in your browser to finish the initial setup.
Log in as root supplying the password you set previously.
A welcome screen appears. Click Start to go through a small initial setup wizard. Everything can also be changed later.
First screen, networking, just press Commit.
Second screen, DNS, just press Commit.
Next, NTP, just press Commit.
Next, Name Services… here you need to set up LD… just kidding, press Commit.
Now, Configure storage, here you can set up the storage pool.
Press the plus sign before Available pools and supply a pool name – pool1 in my example.
On the Verify and allocate devices screen you can just press Commit.
The next screen, Choose storage profile, is the most interesting. In production it requires a lot of consideration, because every choice has very different availability, read performance, and write performance implications. Here I’ll just choose Striped to get maximum performance with no loss of available storage size. Obviously this would be a very bad choice for a production system, since it has no availability or fault tolerance at all.
Press Commit. You are now back on the Configure storage screen; press Commit here again.
On the final Registration & Support page there is no Commit button, but there is a button called Later; press it and confirm by pressing OK.
All done!
Set up iSCSI
If you want to share iSCSI block devices, you first need to create an iSCSI target.
Go to Configuration > SAN and click on iSCSI.
Click the plus sign before Targets.
If you don’t care how the IQN looks, just provide some name for Alias and press OK.
It will then automatically generate an IQN for you; in my example it is named: iqn.1986-03.com.sun:02:455fe302-6504-6eaf-d478-9b3acf9f4afe
Set up SNMP
By default the SNMP service is not enabled, but it is required if you want to test Hybrid Columnar Compression. You can get more details about that from this blog post.
Go to Configuration > Services, and click on the arrow button to expand the Services menu.
Click on SNMP from the left-side menu.
First click on the power button symbol under SNMP to enable the service and set the following values:
* Authorized network/mask: 0.0.0.0 / 0
* Appliance contact: your email address
* Trap destination: 127.0.0.1
Click Apply.
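Once SNMP is enabled and the datafiles live on the appliance (accessed over dNFS), a quick way to check whether the database recognizes the storage is to try creating an HCC-compressed table. This is just a rough test (the tablespace name zfs_data is hypothetical – use a tablespace that sits on the ZFS share); if the storage is not recognized, the statement typically fails with ORA-64307:
SQL> create table hcc_test compress for query high tablespace zfs_data as select * from dba_objects;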
REST API
All storage configuration can also be done using the REST API. You can create new LUNs and filesystems, change their properties, snapshot, clone, drop… everything that you can do in the GUI you can also do over the REST API.
I use it a lot in production in database backup and restore scripts, and also for providing production database clones for testing.
I have also blogged about it before: SAMPLE CODE: USING THE ORACLE ZFS STORAGE APPLIANCE REST API FROM PYTHON
I really hope I can soon publish my full Oracle database backup and restore script suite, which also relies heavily on ZFSSA storage features.
The ZFSSA REST API documentation is here.
The REST API is turned on by default and is accessible over the same URL as the management GUI: https://192.168.56.98:215/
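As a quick test you can query the appliance from the command line with curl. This is only a sketch: I’m using HTTP basic authentication and the pools resource path as I remember it from the RESTful API guide, so verify the exact paths against the documentation for your software version.
# list the configured storage pools (-k skips certificate verification for the self-signed cert)
curl -k -u root -H "Accept: application/json" https://192.168.56.98:215/api/storage/v1/pools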
Create NFS filesystem
To create a new NFS filesystem, go to Shares.
Click the plus sign next to Filesystems.
Enter the filesystem name; you can also set the share UID, GID, and permissions. For example, if you are using it for an Oracle database and have installed the oracle-rdbms-preinstall RPM package on Linux, you could set the User and Group values to 54321 to get the correct ownership on mount.
Click Apply.
The newly created filesystem appears on the list.
You can now mount the NFS filesystem on a target machine, in my example with:
mount -t nfs 192.168.56.98:/export/oradata /mnt
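For actual Oracle database use you would normally add the NFS mount options recommended for datafiles; roughly something like the following (check the current recommendations for your database and OS version):
mount -t nfs -o rw,bg,hard,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 192.168.56.98:/export/oradata /mnt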
If you hover over the filesystem entry, you will notice a pencil icon on the left. Clicking it lets you change the filesystem properties, restrictions, and snapshots.
If the filesystem is going to be used for Oracle database data files (not for RMAN, UNDO, REDO, or TEMP), then one thing you may want to change is the Database record size, to match the tablespace block size (8K).
Click Apply.
Create iSCSI LUN
Go to Shares and click on LUNs. This is how you present block devices to servers, to be used for example as ASM disks.
Click the plus sign next to LUNs.
Fill out the properties. Again, if the LUN is going to be used for Oracle tablespace data files, you may want to set the Volume block size to the tablespace block size (8K).
Click Apply.
The newly created LUN appears on the list.
Again, the pencil icon opens the LUN detailed properties page.
NB! LUNs can also be compressed and deduplicated!
Let’s try accessing this LUN from Linux. First discover the target and log in:
[root@localhost ~]# iscsiadm -m discovery -t sendtargets -p 192.168.56.98
192.168.56.98:3260,2 iqn.1986-03.com.sun:02:455fe302-6504-6eaf-d478-9b3acf9f4afe
[root@localhost ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.1986-03.com.sun:02:455fe302-6504-6eaf-d478-9b3acf9f4afe, portal: 192.168.56.98,3260] (multiple)
Login to [iface: default, target: iqn.1986-03.com.sun:02:455fe302-6504-6eaf-d478-9b3acf9f4afe, portal: 192.168.56.98,3260] successful.
Now I should have the new LUN mapped to Linux:
[root@localhost proc]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: VBOX Model: CD-ROM Rev: 1.0
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: VBOX HARDDISK Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: SUN Model: Sun Storage 7000 Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
[root@localhost proc]# cat /proc/partitions
major minor #blocks name
11 0 57620 sr0
8 0 12582912 sda
8 1 204800 sda1
8 2 12377088 sda2
252 0 11325440 dm-0
252 1 1048576 dm-1
8 16 10485760 sdb
My new device is /dev/sdb. Let’s confirm its SCSI ID.
[root@localhost proc]# /usr/lib/udev/scsi_id -g -u /dev/sdb
3600144f09ff1616800005662f2f40001
It matches the ID in the ZFS management interface GUID column perfectly (ignore the first digit).
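If this LUN is going to become an ASM disk, you would typically pin a stable device name and the right permissions with a udev rule keyed on that SCSI ID. A sketch (the symlink name and the oracle:dba ownership are just my assumptions – adjust them to your installation):
# /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="3600144f09ff1616800005662f2f40001", SYMLINK+="oracleasm/asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
Reload the rules with udevadm control --reload-rules and re-trigger them with udevadm trigger.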
Archives: Recatalog incrementally updated image copy in RMAN
For our backup strategy we use incrementally updated image copies on most Oracle databases. This method can save a lot of time during restore operations: instead of restoring a full backup and then applying all the incremental backups, you can restore from the image copy directly (or skip the restore entirely and switch the database over to the image copy). At the same time, taking backups is as easy and fast as taking incremental backups (in Enterprise Edition, block change tracking also helps here).
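For reference, the core of such a backup job is the standard pair of RMAN commands below (the tag is the same one used throughout this post). Run it daily: the first command rolls the image copy forward with the previous incremental, the second takes a new level 1 incremental on top of the copy (and creates the level 0 image copy on the very first run).
run {
  recover copy of database with tag 'image_copy_backup';
  backup incremental level 1 for recover of copy with tag 'image_copy_backup' database;
}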
Today I wanted to change the naming scheme of the incrementally updated image copies; in our case the image copies are stored on NFS, not in ASM. I expected it to be straightforward: rename the files, crosscheck, delete expired, and then catalog again (like with normal backupsets). After doing that I tried to update the incremental copy, and this is what happened.
First, this is my current setup. My goal was to remove the duplicate dbarep1_ from the beginning of the file names, which I had added there myself with the backup format string.
SQL> select file#, tag, incremental_level, name from v$datafile_copy where deleted='NO' order by 1;
FILE# TAG INCREMENTAL_LEVEL NAME
---------- -------------------- ----------------- -----------------------------------------------------------------------------------
1 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
2 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
3 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
4 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
5 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
7 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
6 rows selected.
Now rename the files to:
data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
After renaming, I tried to catalog the files again, and at first everything looked good.
RMAN> crosscheck datafilecopy all;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=216 instance=dbarep11 device type=DISK
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2 RECID=4001 STAMP=897265906
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc RECID=4003 STAMP=897265908
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur RECID=4002 STAMP=897265906
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va RECID=4000 STAMP=897265905
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd RECID=3999 STAMP=897265901
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb RECID=4004 STAMP=897265910
Crosschecked 6 objects
RMAN> delete expired datafilecopy all;
released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=216 instance=dbarep11 device type=DISK
List of Datafile Copies
=======================
Key File S Completion Time Ckp SCN Ckp Time
------- ---- - --------------- ---------- ---------------
990260 1 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
Tag: IMAGE_COPY_BACKUP
990262 2 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
Tag: IMAGE_COPY_BACKUP
990261 3 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
Tag: IMAGE_COPY_BACKUP
990259 4 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
Tag: IMAGE_COPY_BACKUP
990258 5 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
Tag: IMAGE_COPY_BACKUP
990263 7 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
Tag: IMAGE_COPY_BACKUP
Do you really want to delete the above objects (enter YES or NO)? yes
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2 RECID=4001 STAMP=897265906
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc RECID=4003 STAMP=897265908
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur RECID=4002 STAMP=897265906
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va RECID=4000 STAMP=897265905
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd RECID=3999 STAMP=897265901
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb RECID=4004 STAMP=897265910
Deleted 6 EXPIRED objects
RMAN> catalog start with '/nfs/backup/dbarep1/data_';
searching for all files that match the pattern /nfs/backup/dbarep1/data_
List of Files Unknown to the Database
=====================================
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
RMAN shows that the datafile copies are nicely registered, with the correct tag:
RMAN> list datafilecopy all;
List of Datafile Copies
=======================
Key File S Completion Time Ckp SCN Ckp Time
------- ---- - --------------- ---------- ---------------
991836 1 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
Tag: IMAGE_COPY_BACKUP
991832 2 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
Tag: IMAGE_COPY_BACKUP
991834 3 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
Tag: IMAGE_COPY_BACKUP
991835 4 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
Tag: IMAGE_COPY_BACKUP
991833 5 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
Tag: IMAGE_COPY_BACKUP
991837 7 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
Tag: IMAGE_COPY_BACKUP
And then, thinking all was good, I tried to refresh the copy. This is what happened:
RMAN> backup incremental level 1 for recover of copy with tag 'image_copy_backup' database;
Starting backup at 01-DEC-15
using channel ORA_DISK_1
no parent backup or copy of datafile 7 found
no parent backup or copy of datafile 2 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 5 found
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=+DATA/dbarep1/datafile/sash.286.779298095
...
RMAN thinks that there is no image copy to update and tries to create a new full image copy! If you have a >20 TB database, that is an expensive price to pay.
Let’s query the data dictionary directly to see more information about the datafile copies that were registered:
SQL> select file#, tag, incremental_level, name from v$datafile_copy where deleted='NO' order by 1;
FILE# TAG INCREMENTAL_LEVEL NAME
---------- -------------------- ----------------- -----------------------------------------------------------------------------------
1 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
2 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
3 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
4 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
5 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
7 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
6 rows selected.
The incremental level is NULL! The CATALOG START WITH command did not register the datafile copies as a base for incremental backups, so it was the wrong command to use. To register a datafile copy properly for incremental updates, there is a separate catalog command: CATALOG DATAFILECOPY ‘filename’ LEVEL 0 TAG ‘tagname’;
First I remove the invalid registrations:
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb' uncatalog;
And then register the datafilecopy properly:
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd' level 0 tag 'IMAGE_COPY_BACKUP';
After that, the incremental update worked again.