- Written by: ilmarkerm
- Category: Blog entry
- Published: December 1, 2015
For our backup strategy we are using incrementally updated image copies on most Oracle databases. This method can save a lot of time during restore operations: you don't need to restore a full backup and then apply all the incremental backups on top, you can restore the image copy directly (or skip the restore operation altogether and just switch the database over to the image copy). At the same time, taking backups is as easy and fast as taking incremental backups (in Enterprise Edition, block change tracking also helps here).
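For reference, the usual nightly run with this strategy looks something like the standard RMAN recipe below, with the same tag that appears in the examples later in this post:
run {
  backup incremental level 1 for recover of copy with tag 'image_copy_backup' database;
  recover copy of database with tag 'image_copy_backup';
}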
Today I wanted to change the naming scheme of the incrementally updated image copies; in our case the image copies are stored on NFS, not in ASM. I expected it to be straightforward: rename the files, crosscheck, delete expired and then catalog again (as with normal backup sets). After doing that I tried to update the incremental copy, and this is what happened.
First, here is my current setup. My goal was to remove the duplicate dbarep1_ from the beginning of the file names, which I had added there myself with the backup format string.
SQL> select file#, tag, incremental_level, name from v$datafile_copy where deleted='NO' order by 1;
FILE# TAG INCREMENTAL_LEVEL NAME
---------- -------------------- ----------------- -----------------------------------------------------------------------------------
1 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
2 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
3 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
4 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
5 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
7 IMAGE_COPY_BACKUP 0 /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
6 rows selected.
Now rename the files to:
data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
After renaming I crosschecked the copies, deleted the expired ones and cataloged the files again, and at first everything looks good.
RMAN> crosscheck datafilecopy all;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=216 instance=dbarep11 device type=DISK
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2 RECID=4001 STAMP=897265906
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc RECID=4003 STAMP=897265908
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur RECID=4002 STAMP=897265906
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va RECID=4000 STAMP=897265905
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd RECID=3999 STAMP=897265901
validation failed for datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb RECID=4004 STAMP=897265910
Crosschecked 6 objects
RMAN> delete expired datafilecopy all;
released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=216 instance=dbarep11 device type=DISK
List of Datafile Copies
=======================
Key File S Completion Time Ckp SCN Ckp Time
------- ---- - --------------- ---------- ---------------
990260 1 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
Tag: IMAGE_COPY_BACKUP
990262 2 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
Tag: IMAGE_COPY_BACKUP
990261 3 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
Tag: IMAGE_COPY_BACKUP
990259 4 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
Tag: IMAGE_COPY_BACKUP
990258 5 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
Tag: IMAGE_COPY_BACKUP
990263 7 X 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
Tag: IMAGE_COPY_BACKUP
Do you really want to delete the above objects (enter YES or NO)? yes
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2 RECID=4001 STAMP=897265906
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc RECID=4003 STAMP=897265908
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur RECID=4002 STAMP=897265906
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va RECID=4000 STAMP=897265905
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd RECID=3999 STAMP=897265901
deleted datafile copy
datafile copy file name=/nfs/backup/dbarep1/dbarep1_data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb RECID=4004 STAMP=897265910
Deleted 6 EXPIRED objects
RMAN> catalog start with '/nfs/backup/dbarep1/data_';
searching for all files that match the pattern /nfs/backup/dbarep1/data_
List of Files Unknown to the Database
=====================================
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
File Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
RMAN shows that the datafile copies are nicely registered, with the correct tag:
RMAN> list datafilecopy all;
List of Datafile Copies
=======================
Key File S Completion Time Ckp SCN Ckp Time
------- ---- - --------------- ---------- ---------------
991836 1 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
Tag: IMAGE_COPY_BACKUP
991832 2 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
Tag: IMAGE_COPY_BACKUP
991834 3 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
Tag: IMAGE_COPY_BACKUP
991835 4 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
Tag: IMAGE_COPY_BACKUP
991833 5 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
Tag: IMAGE_COPY_BACKUP
991837 7 A 01-DEC-15 6146800878535 01-DEC-15
Name: /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
Tag: IMAGE_COPY_BACKUP
Then, thinking all was good, I tried to refresh the copy. This is what happened:
RMAN> backup incremental level 1 for recover of copy with tag 'image_copy_backup' database;
Starting backup at 01-DEC-15
using channel ORA_DISK_1
no parent backup or copy of datafile 7 found
no parent backup or copy of datafile 2 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 5 found
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=+DATA/dbarep1/datafile/sash.286.779298095
...
RMAN thinks that there is no image copy to update and tries to create a new full image copy! With a >20 TB database, that is an expensive price to pay 🙂
Let's query the data dictionary directly to see more information about the datafile copies that were registered:
SQL> select file#, tag, incremental_level, name from v$datafile_copy where deleted='NO' order by 1;
FILE# TAG INCREMENTAL_LEVEL NAME
---------- -------------------- ----------------- -----------------------------------------------------------------------------------
1 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2
2 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc
3 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur
4 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va
5 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd
7 IMAGE_COPY_BACKUP /nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb
6 rows selected.
Incremental level is NULL! The CATALOG START WITH command did not register the datafile copies as a base for incremental backups; it was the wrong command to use. To register a datafile copy properly for incremental updates, there is a separate catalog command: CATALOG DATAFILECOPY 'filename' LEVEL 0 TAG 'tagname';
First I remove the invalid registrations:
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2' uncatalog;
change datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb' uncatalog;
And then register the datafile copies properly:
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SASH_FNO-7_r5qjk7sb' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSAUX_FNO-2_r6qjk7uc' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-SYSTEM_FNO-1_r8qjk7v2' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS1_FNO-3_r7qjk7ur' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-UNDOTBS2_FNO-4_r9qjk7va' level 0 tag 'IMAGE_COPY_BACKUP';
catalog datafilecopy '/nfs/backup/dbarep1/data_D-DBAREP1_I-1714430310_TS-USERS_FNO-5_raqjk7vd' level 0 tag 'IMAGE_COPY_BACKUP';
After that incremental update worked again.
- Written by: ilmarkerm
- Category: Blog entry
- Published: July 19, 2015
I recently had an interesting challenge: directing Data Guard network traffic over a dedicated network interface instead of through the main network routes. The reason was that, in order to reach the remote datacenter, the Data Guard traffic had to be sent via a WAN optimizer network that automatically compresses and dedups all traffic (hard to believe that there are still places left in Europe where internet bandwidth is extremely expensive). I know that since 11.2 Oracle can also do redo compression for Data Guard traffic, but that requires the Advanced Compression option and uses database host CPUs to do the compression; WAN optimizer software licenses are cheaper and usable for other tasks as well.
The following tasks need to be completed:
- Configure the new network interface in the OS. I'm not going to discuss it here, but it is a prerequisite for any of the tasks to be done in Oracle.
- Configure the new network for the Oracle listener. If you are using a standalone database and haven't got any special configuration for your listener, it should already listen on the new network interface as well. RAC needs special attention and is described in this blog post: basically you need to configure a new set of VIP addresses, create a new network resource in the clusterware and configure the database instances to register on the new network (see the sketch after this list). A SCAN address cannot be created on the additional network.
- Configure Data Guard to communicate over the newly created network. The necessary steps are described below, and I'm assuming that Data Guard Broker is in use.
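For the RAC listener step, the clusterware commands look roughly like the sketch below. This is for 11.2 only, and the subnet, netmask, interface and names are made-up examples, not values from this setup:
# Register the second network (network number 2) with the clusterware
srvctl add network -k 2 -S 192.168.99.0/255.255.255.0/eth2
# Add and start a VIP on the new network (repeat for each cluster node)
srvctl add vip -n oc1n01 -k 2 -A oc1n01-sb-vip/255.255.255.0/eth2
srvctl start vip -i oc1n01-sb-vip
# Add and start a listener on the new network
srvctl add listener -l LISTENER_SB -k 2 -p 1521
srvctl start listener -l LISTENER_SB
# Finally, make the instances register on the new listener, for example
# by setting the LISTENER_NETWORKS instance parameter accordingly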
My database setup is the following: the primary database is RAC (prd_db_rac) and it has two physical standbys, one local (prd) and one remote (prd_wan1).
$ ORACLE_SID=prd dgmgrl /
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration
Configuration - prd
Protection Mode: MaxPerformance
Databases:
prd_db_rac - Primary database
prd - Physical standby database
prd_wan1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
Now let's look at the properties of one of the databases.
DGMGRL> show database verbose prd_wan1;
Database - prd_wan1
Enterprise Manager Name: prd_wan1.prod.example.com
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Apply Rate: 3.08 MByte/s
Real Time Query: ON
Instance(s):
prd
Properties:
DGConnectIdentifier = '//oc1dg2.wan1.example.com/prd_wan1.prod.example.com'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
SidName = 'prd'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oc1dg2.wan1.example.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=prd_wan1_DGMGRL.prod.example.com)(INSTANCE_NAME=prd)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
TopWaitEvents = '(monitor)'
Database Status:
SUCCESS
Each database in the Broker configuration has a property telling the other databases in the same configuration how to connect to it and send redo to it. This property is the first one on the list, DGConnectIdentifier. All we need to do is change it to point to an address on the new network. For single instance databases this is an easy task, since we can use the easy connect syntax; in my case I need to change the physical standby connection strings as follows (host names with the sb. prefix are located on the new WAN-optimized network).
DGMGRL> EDIT DATABASE prd SET PROPERTY 'DGConnectIdentifier'='//sb.oc1dg1.prod.example.com/prd.prod.example.com';
Property "DGConnectIdentifier" updated.
DGMGRL> EDIT DATABASE prd_wan1 SET PROPERTY 'DGConnectIdentifier'='//sb.oc1dg2.wan1.example.com/prd_wan1.prod.example.com';
Property "DGConnectIdentifier" updated.
The same property needs to be changed for the primary database as well, but a SCAN address cannot be created for the secondary network, and in Broker the property value string cannot be long enough to hold the full RAC connect string for my configuration. So I needed to add the following entry to tnsnames.ora for all Oracle Homes involved in the configuration (both primary and all standbys). Here again, the sb part of the DNS name refers to an IP on the new network.
PRD_DB_RAC_SB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oc1n01-sb-vip.prod.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = oc1n02-sb-vip.prod.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = oc1n03-sb-vip.prod.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = oc1n04-sb-vip.prod.example.com)(PORT = 1521))
      (FAILOVER = on)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = prd_db_rac.prod.example.com)
    )
  )
Now I can make the same Broker property change for the primary RAC database as well.
DGMGRL> EDIT DATABASE prd_db_rac SET PROPERTY 'DGConnectIdentifier'='PRD_DB_RAC_SB';
Property "DGConnectIdentifier" updated.
That was all; no need to restart anything, since after a property change Broker automatically updates the instance parameters itself. We made this change in order to replace the old SAN-based replication to the remote datacenter with Data Guard, and to save on bandwidth we needed to run the Data Guard traffic through a separate WAN-optimized network. On average the WAN optimizer compresses Data Guard traffic about 3-4 times, and compared to the old SAN replication setup (the SAN also compressed the change log internally before sending it to the remote site) we use about 2 times less bandwidth (actually even more, since we were previously limiting SAN replication bandwidth: during the day it always ran at the limit and caught up during the night). Considering how much internet bandwidth costs in some places, the extra Data Guard licenses needed will pay for themselves in a surprisingly short time in reduced internet bandwidth alone (not counting the extra features they bring, like Active Data Guard).
- Written by: ilmarkerm
- Category: Blog entry
- Published: March 22, 2015
Since I started a new job, I also wanted to try out a MacBook as my new laptop. I've never been a Mac user, but there seems to be a large group of people who think Macs are the best, so I needed to find out the goodness of Macs for myself 🙂 A quick tip: don't upgrade to Yosemite if your laptop is bound to Active Directory.
One part of setting up my new laptop was also setting up SQL Developer WITH the Oracle client driver, because a year or two ago I rolled out RADIUS authentication for all users who need to access the production databases directly, and RADIUS (like other non-default-password authentication schemes) does not work with the thin JDBC driver. Being new to the Mac world and not finding any guides for this task online, it was quite a struggle in the beginning. Here I'm sharing my experiences in case anybody else finds themselves in a similar situation. If you find any mistakes here, please let me know; I'm new to the world of MacOS 🙂
Jan Karremans just today published a similar article about Mac and Instant Client, but the goal of my blog post is to get SQL Developer connected through Instant Client.
JDK
First you need Java installed. I installed the 64-bit JDK 8, but it would be better to install the 64-bit JDK 7, since SQL Developer 4.0 is not supported on JDK 8.
SQL Developer
At the time of writing, SQL Developer 4.1 is an Early Adopter release and 4.0 is the production version. I installed SQL Developer 4.0.3.
Oracle Instant Client
There is another reason to choose SQL Developer 4.0 over 4.1: version 4.1 requires Oracle Client 12.1, which has not yet been released for MacOS. SQL Developer 4.0 requires Oracle Client 11.2.0.3, and that is available for MacOS. I downloaded Instant Client 11.2.0.3 (64-bit, since I installed the 64-bit JDK): instantclient-basic-macos.x64-11.2.0.3.0.zip and instantclient-sqlplus-macos.x64-11.2.0.3.0.zip to be precise. I unzipped both files under /opt, so my Instant Client installation directory is /opt/instantclient_11_2. After unzipping, the Instant Client installation instructions require creating the following symlinks:
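That is, from the installation directory (these are the standard post-install symlinks from the Instant Client readme):
cd /opt/instantclient_11_2
ln -s libclntsh.dylib.11.1 libclntsh.dylib
ln -s libocci.dylib.11.1 libocci.dylib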
Configuring SQL Developer
Open SQL Developer, open Preferences and go to Database > Advanced. Configure Oracle Client paths and check Use Oracle Client and Use OCI/Thick driver boxes. Here is the sample screenshot from my settings.
When you press the Configure button to set the Oracle client path, there is also a button to Test if the Client was loaded correctly. Currently this test should fail.
Setting the environment variables
We need to set two environment variables: DYLD_LIBRARY_PATH, to make sure applications can find and load the Instant Client libraries, and TNS_ADMIN, so Instant Client can find the sqlnet.ora and tnsnames.ora files. In my case sqlnet.ora is essential, since I need to use RADIUS authentication.
First I added these environment variables to ~/.profile:
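Along these lines (this assumes the network configuration files are kept under the Instant Client directory; point TNS_ADMIN to wherever your sqlnet.ora and tnsnames.ora actually live):
export DYLD_LIBRARY_PATH=/opt/instantclient_11_2
export TNS_ADMIN=/opt/instantclient_11_2/network/admin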
After restarting the terminal and verifying that these environment variables are indeed set, launch the SQL Developer app directly from the terminal by executing open /Applications/SQLDeveloper.app/.
Open preferences and press the Oracle Client Test button again, now the test should succeed and SQL Developer is connected through Instant Client.
All good! Great success! Guess again… Close SQL Developer, launch it from the Dock/Launchpad, and you will see the Oracle Client test failing again. This is because the environment variables in ~/.profile or ~/.bash_profile are loaded only inside Terminal.
Setting environment variables on login
This seems to be the most complex part of the story, since the mechanism seems to change with every MacOS release. The method that is working for me under Yosemite (and in no way am I declaring that it is the best method; I am a Mac newbie after all) is this:
Open the Script Editor application and paste the following AppleScript there:
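A minimal version following the Stackoverflow method linked below (using the paths configured above) would be:
do shell script "launchctl setenv DYLD_LIBRARY_PATH /opt/instantclient_11_2"
do shell script "launchctl setenv TNS_ADMIN /opt/instantclient_11_2/network/admin"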
Yes, it is also necessary to set TNS_ADMIN, since setting the TNS admin directory path under SQL Developer preferences does not work if you need to use specific settings in sqlnet.ora, for example authentication.
Save the script, and in the save dialog set File format: Application. Finally, under System Preferences -> Users & Groups -> Login Items, press the + sign and select the application you just saved. Now log out of MacOS and log back in. SQL Developer should now work with Instant Client even when you launch it from Launchpad.
I got this method from a Stackoverflow thread.
- Written by: ilmarkerm
- Category: Blog entry
- Published: January 1, 2015
Before a storage device can be removed (or unassigned) from a server, it must first be removed from Linux. This is even more necessary if the server is running Oracle ASMLib, because if there are any removed or read-only storage devices present when running service oracleasm scandisks, the command will just hang and the server load will start to increase (the maximum I've seen was 3500).
Here you can find Red Hat's instructions on how to cleanly remove storage devices in RHEL6. That is a lot of steps, especially when running multipath and having multiple paths to each device.
Since we replicate and clone databases to test using storage replication, assigning and unassigning storage devices on test servers is quite frequent, and if it is not done properly and cleanly, we have to reboot the server. Here is a little script to help with that task: it takes the ASMLib disk names to be removed as arguments, finds all paths to each device and removes them from Linux.
Tested on OEL5 (using the RHEL kernel) and OEL6 (using the UEK2 kernel). This script only deals with dm-* and sd* devices, so before using it please check and modify the script for your environment. Before removing the storage devices from Linux, make sure that the disk groups are dismounted from ASM and not in use.
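The core of the idea is a loop like the following sketch. This is an illustration only, not the full script: it assumes dm-multipath devices, the default ASMLib disk path under /dev/oracleasm/disks, and a sysfs layout that provides /sys/dev/block (RHEL6-era kernels), and it must be run as root.
#!/bin/bash
# Usage: ./remove_asm_disks.sh DISKNAME1 DISKNAME2 ...
for NAME in "$@"; do
    ASMDISK=/dev/oracleasm/disks/$NAME
    # Resolve the ASMLib label to the underlying block device major:minor
    MAJMIN=$(printf '%d:%d' 0x$(stat -c %t $ASMDISK) 0x$(stat -c %T $ASMDISK))
    DM=$(basename $(readlink -f /sys/dev/block/$MAJMIN))    # e.g. dm-12
    MAP=$(cat /sys/block/$DM/dm/name)                       # multipath map name
    SLAVES=$(ls /sys/block/$DM/slaves)                      # sd* paths behind the map
    echo "Removing $NAME (map $MAP, paths: $SLAVES)"
    # Flush the multipath map first, then delete each SCSI path from the kernel
    multipath -f "$MAP"
    for SD in $SLAVES; do
        echo 1 > /sys/block/$SD/device/delete
    done
done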
- Written by: ilmarkerm
- Category: Blog entry
- Published: December 28, 2014
Most (all?) of the Oracle ZFS Storage Appliance (ZS3-4 in my case) management functions are also exposed through a REST API, so it is possible to script and automate storage management tasks. No proprietary clients or GUI needed; you just need to write a simple script that sends some REST commands over HTTPS to the appliance and parses the JSON output.
I had a task of automating test database refreshes from production, and part of that workflow is also cloning and attaching LUNs from ZFSSA. I did not find any sample code for accessing ZFSSA online, so here I am publishing my code: a Python script that makes use of a few ZFSSA REST API calls and exposes them as Fabric tasks.
This script exposes four tasks:
- clone – Creates a new clone from replication
- attach_stage1 – Attaches all LUNs in a project to a single target server
- attach_stage2 – Attaches all LUNs in a project to multiple target servers
- drop – Deletes a project
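To give a flavor of the calls involved, here is a minimal sketch (not the full Fabric script) of the LUN-listing request discussed below. The host name and credentials are placeholders, and HTTP basic authentication is assumed:
import requests

ZFSSA_URL = 'https://zfssa.example.com:215'    # ZFSSA management/REST port
AUTH = ('rest_user', 'rest_password')          # placeholder credentials

def list_luns(pool, project):
    # GET the LUNs of a project; note that this call fails with ERR_INTERNAL
    # if any LUN in the project has multiple initiator groups (see below)
    r = requests.get('%s/api/storage/v1/pools/%s/projects/%s/luns'
                     % (ZFSSA_URL, pool, project),
                     auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()['luns']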
The ZFSSA REST API works fine if each LUN has only a single initiator group, but when assigning a LUN to multiple initiator groups (to be used in a RAC setup), I observed the following errors (they must be ZFSSA REST API bugs):
- When modifying a LUN to be part of multiple initiator groups (PUT request to /api/storage/v1/pools/poolname/projects/projectname/luns/lunname), the API responds with (argument numbers may differ): {"fault": {"message": "invalid input argument (assignednumber: invalid literal for int() with base 10: '0,0,0')", "code": 400, "name": "ERR_INVALID_ARG"}}. Nevertheless, the ZFSSA GUI confirms that the operation completed successfully. This script simply ignores the returned error message when assigning a LUN to multiple initiator groups.
- If any LUN in the project has multiple initiator groups defined, the API call to get the list of LUNs in a project (GET request to /api/storage/v1/pools/poolname/projects/projectname/luns) returns an error immediately: {"luns":[{"fault": {"message": "internal error", "code": 500, "name": "ERR_INTERNAL"}}]}. This script cannot reattach the LUNs in that case, since getting the list of LUNs fails.