Installing Oracle ZFS Storage Appliance simulator for your virtual storage requirements
- Written by: ilmarkerm
- Category: Blog entry
- Published: December 5, 2015
Oracle has released its ZFS Storage Appliance software simulator. The appliance is sold as Tier 2 storage hardware, but using the simulator you can get all of its powerful storage management features in your VirtualBox. The simulator is free and has no time limits or restrictions (well, one restriction – the simulator is not clustered).
Some use cases for the simulator:
* You can try out new ZFS Storage Appliance software patches before applying them to the real physical storage box
* You want to test some Oracle database features that are only activated when using Oracle storage (HCC)
* You want to provide shared network storage for your VMs, with advanced storage capabilities like snapshots, cloning, compression, deduplication, remote replication, encryption, etc.
* You want to evaluate ZFS Storage Appliance features before purchasing the real box or before using a specific feature in production
Setting up the simulator under VirtualBox is very simple and quick. In the configuration below I make no effort to secure the system, since it is intended to be used only in my VirtualBox environment.
Requirements
VirtualBox 4.2.12 or later.
I'm using VirtualBox 5.
Download the software
Go to the ZFS Storage software page and click Try the simulator. First you have to register the download, and then you can download a ZIP file containing the software.
Unzip the downloaded file and you'll get an OracleZFSStorageVM directory with 18 files under it.
Import VM to Virtualbox
Open VirtualBox Manager, go to File > Import Appliance.
Browse to Oracle_ZFS_Storage.ovf from the unzipped software directory.
Click Continue.
On the next screen you can view the imported VM settings and then press Import.
Initial configuration
After importing you will have a new VM called Oracle_ZFS_Storage.
Open Settings for this VM and go to the Network tab. Verify that it is connected to the correct virtual network. Mine was automatically attached to the only Host-only Adapter network I have, and I'm going to keep that setting.
Now launch the VM.
It will take 10+ seconds to boot up, and on the first boot it will ask you a few questions about basic network settings and the root user password.
Supply the requested values:
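For example, here are the values I would supply on my host-only network (hypothetical values; the exact prompts may differ slightly between simulator versions):
Host Name: zfssa
DNS Domain: localdomain
IP Address: 192.168.56.98
IP Netmask: 255.255.255.0
Default Router: 192.168.56.1
DNS Server: 192.168.56.1
Password: (your root password)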
Press Enter when done. After that it will show you a blank screen for 30+ seconds, and once it is done the system is ready.
Now open the requested URL https://192.168.56.98:215/ in your browser to finish the initial setup.
Log in as root supplying the password you set previously.
The welcome screen appears. Click Start to go through a small initial setup wizard. Everything can also be changed later.
First screen, networking, just press Commit.
Second screen, DNS, just press Commit.
Next, NTP, just press Commit.
Next, Name Services… here you need to set up LD… just kidding, press Commit.
Now, Configure storage, here you can set up the storage pool.
Press the plus sign before Available pools and supply a pool name – pool1 in my example.
On the Verify and allocate devices screen you can just press Commit.
The next screen, Choose storage profile, is the most interesting. In production it requires a lot of consideration, because every choice has very different availability, read performance and write performance implications. Here, I'll just choose Striped to get maximum performance with no loss in available storage size. Obviously this would be a very bad choice for a production system, since it provides no availability or fault tolerance at all.
Press Commit. You are now back on the Configure storage screen; here, again, press Commit.
On the final Registration & Support page there is no Commit button, but there is a button called Later, press it and then confirm it by pressing OK.
All done 🙂
Set up iSCSI
If you want to share iSCSI block devices, then you first need to create an iSCSI Target.
Go to Configuration > SAN and click on iSCSI.
Click the plus sign before Targets.
If you don’t care how the IQN looks, just provide some name for Alias and press OK.
It will then automatically generate an IQN for you; in my example it is named: iqn.1986-03.com.sun:02:455fe302-6504-6eaf-d478-9b3acf9f4afe
Set up SNMP
By default the SNMP service is not enabled, but it is required if you want to test Hybrid Columnar Compression. You can get more details about it from this blog post.
Go to Configuration > Services, and click on the arrow button to expand the Services menu.
Click on SNMP from the left-side menu.
First, click on the power button symbol under SNMP to enable the service, then set the following values:
* Authorized network/mask: 0.0.0.0 / 0
* Appliance contact: your email address
* Trap destination: 127.0.0.1
Click Apply.
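To quickly verify from a Linux client that the appliance answers SNMP queries, you can use the net-snmp tools. A minimal sketch, assuming the net-snmp-utils package is installed; the enterprise OID below is the one I have seen Oracle documentation use for detecting a ZFSSA for HCC, so treat it as an assumption and double-check it in the blog post linked above:
# query the Sun/Oracle enterprise OID that identifies the appliance
snmpget -v1 -c public 192.168.56.98 .1.3.6.1.4.1.42.2.225.1.4.2.0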
REST API
All storage configuration can also be done using the REST API. You can create new LUNs and filesystems, change their properties, snapshot, clone, drop… everything that you can do in the GUI you can also do over the REST API.
I use it a lot in production, in database backup & restore scripts and in providing production database clones for testing.
I have also blogged about it before: SAMPLE CODE: USING THE ORACLE ZFS STORAGE APPLIANCE REST API FROM PYTHON
I really hope I can soon publish my full Oracle database backup & restore script suite, which also relies heavily on ZFSSA storage features.
The ZFSSA REST API documentation is here.
The REST API is turned on by default and is accessible over the same URL as the management GUI: https://192.168.56.98:215/
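As a quick taste, here is a minimal sketch using curl; the -k flag skips the self-signed certificate check and curl prompts for the root password. The /api/storage/v1/pools endpoint path is taken from the ZFSSA REST API documentation, so verify it against your software version:
# list the configured storage pools as JSON
curl -k -u root https://192.168.56.98:215/api/storage/v1/pools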
Create NFS filesystem
To create a new NFS filesystem, go to Shares.
Click the plus sign next to Filesystems.
Write the filesystem name; you can also set the share UID, GID and permissions. For example, if you are using it for an Oracle database and have installed the oracle-rdbms-preinstall RPM package under Linux, then you could set the User and Group values to 54321 to get the correct permissions on mount.
Click Apply.
The newly created filesystem appears on the list.
You can now mount the NFS filesystem on a target machine, in my example using the settings:
mount -t nfs 192.168.56.98:/export/oradata /mnt
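If the filesystem will hold Oracle database files, you would normally also add the NFS mount options Oracle recommends for Linux. A sketch based on the commonly cited recommendations (verify them against the current Oracle documentation for your database version):
mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 192.168.56.98:/export/oradata /mnt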
If you hover over the filesystem entry, you'll notice a pencil icon on the left. Clicking it lets you change filesystem properties, restrictions and snapshots.
If the filesystem is going to be used for Oracle database data files (not for RMAN, UNDO, REDO or TEMP), then one thing you may want to change is Database record size, setting it equal to the tablespace block size, 8K.
Click Apply.
Create iSCSI LUN
Go to Shares and click on LUNs. This way you can present block devices to servers, to be used for example as ASM disks.
Click plus sign next to LUNs.
Fill out the properties. Again, if the LUN is to be used for Oracle tablespace data files, you may want to set the Volume block size to the tablespace block size (8K).
Click Apply.
The newly created LUN appears on the list.
Again, the pencil icon opens the LUN detailed properties page.
NB! LUNs can also be compressed and deduplicated!
Let's try accessing this LUN from Linux. First, connect to the ZFS appliance:
[root@localhost ~]# iscsiadm -m discovery -t sendtargets -p 192.168.56.98
192.168.56.98:3260,2 iqn.1986-03.com.sun:02:455fe302-6504-6eaf-d478-9b3acf9f4afe
[root@localhost ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.1986-03.com.sun:02:455fe302-6504-6eaf-d478-9b3acf9f4afe, portal: 192.168.56.98,3260] (multiple)
Login to [iface: default, target: iqn.1986-03.com.sun:02:455fe302-6504-6eaf-d478-9b3acf9f4afe, portal: 192.168.56.98,3260] successful.
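Optionally, to make the session log back in automatically after a reboot (standard iscsiadm usage; adjust the target name and portal to match yours):
# mark the discovered node for automatic login on boot
iscsiadm -m node -T iqn.1986-03.com.sun:02:455fe302-6504-6eaf-d478-9b3acf9f4afe -p 192.168.56.98 --op update -n node.startup -v automatic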
Now I should have the new LUN mapped to Linux:
[root@localhost proc]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: VBOX Model: CD-ROM Rev: 1.0
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: VBOX HARDDISK Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: SUN Model: Sun Storage 7000 Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 05
[root@localhost proc]# cat /proc/partitions
major minor #blocks name
11 0 57620 sr0
8 0 12582912 sda
8 1 204800 sda1
8 2 12377088 sda2
252 0 11325440 dm-0
252 1 1048576 dm-1
8 16 10485760 sdb
My new device is /dev/sdb. Let's confirm its SCSI ID.
[root@localhost proc]# /usr/lib/udev/scsi_id -g -u /dev/sdb
3600144f09ff1616800005662f2f40001
It matches perfectly with the ID in the ZFS management interface GUID column (ignore the first digit).
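If the LUN is going to be used as an ASM disk, this SCSI ID is handy for giving the device a persistent name and the right ownership. A minimal udev rule sketch (the file name, symlink name and owner/group below are hypothetical; adjust them to your Grid Infrastructure setup):
# /etc/udev/rules.d/99-asm.rules (hypothetical file name)
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="3600144f09ff1616800005662f2f40001", SYMLINK+="asmdisk1", OWNER="oracle", GROUP="dba", MODE="0660"
After reloading the udev rules, the device can be referenced as /dev/asmdisk1, and the name survives reboots and device reordering.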