So your RAC database hangs, stalls occasionally, or you are about to do an emergency reboot?
Franck Pachot has written a good article on what traces to gather for troubleshooting or for Oracle Support.
I had a problem where all RAC instances seemed to “stall” occasionally, so I wanted to execute Franck's script on all instances at the same time, while the problem was happening.
Ansible to the rescue.
First I pushed the following script out to all instances; this is the script that actually performs the diagnostics dump.
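The original script is not reproduced here, so below is a minimal hedged sketch of what it could look like, delivered as an Ansible copy task. The /tmp path, the oradebug levels and the use of a prelim connection are my assumptions, loosely following Franck's article; adjust them to whatever Oracle Support asks for.

```yaml
# Sketch only: push a hang-diagnostics script to every RAC node.
# Script path and dump levels are assumptions, not the original script.
- name: Push diagnostics dump script to all RAC nodes
  hosts: rac_nodes
  become: true
  become_user: oracle
  tasks:
    - name: Install dump script
      copy:
        dest: /tmp/rac_dump.sh
        mode: "0755"
        content: |
          #!/bin/bash
          # -prelim lets us attach even when the instance is unresponsive
          sqlplus -prelim "/ as sysdba" <<EOF
          oradebug setmypid
          oradebug unlimit
          oradebug hanganalyze 3
          oradebug dump systemstate 258
          oradebug tracefile_name
          exit
          EOF
```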
Then comes the Ansible playbook that executes the script above on all instances at the same time and afterwards downloads all traces to your local Ansible controller host:
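Since the playbook itself is not included above, here is a hedged sketch of how it could look. I assume the dump script was pushed to /tmp/rac_dump.sh and that diagnostic_dest is under /u01/app/oracle; both are assumptions you will need to adjust.

```yaml
- hosts: rac_nodes
  become: true
  become_user: oracle
  tasks:
    # The default linear strategy runs each task on all hosts before moving
    # on, so the dump fires on every instance at (roughly) the same time.
    - name: Run diagnostics dump on all instances
      command: /tmp/rac_dump.sh

    - name: Find trace files written in the last 10 minutes
      find:
        paths: /u01/app/oracle/diag    # assumption: your diagnostic_dest
        patterns: "*.trc"
        recurse: true
        age: -10m
      register: traces

    - name: Download traces to the Ansible controller
      fetch:
        src: "{{ item.path }}"
        dest: traces/                  # one subdirectory per host
      loop: "{{ traces.files }}"
```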
This post continues my previous post about modifying SQL Developer preferences with Ansible. Building on the same motivation and technique, my goal in this post is to centrally push out, and keep updated, connection details for SQL Developer on the client side.
First, let's declare the connections we want to push out:
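For example, the declaration could be a simple list in a variables file; the variable name and the connection attributes below are my own illustration, not a fixed schema.

```yaml
# vars.yml -- connections to push out (names, aliases and users are examples)
sqldev_connections:
  - name: SALESDB
    tns_alias: SALESDB_PRIMARY
    username: app_reader
  - name: HRDB
    tns_alias: HRDB_PRIMARY
    username: app_reader
```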
NB! I’m pushing out connections that refer to TNS names, since I want to add some extra RAC-related settings to each connection description.
First we need to create connection.yml, which will contain the tasks to add a single connection to SQL Developer. This file will be called for every connection from the main playbook.
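A hedged sketch of connection.yml is below. The location of connections.xml and its internal XML layout are assumptions for illustration only; create one connection through the SQL Developer GUI first and inspect the resulting file to get the exact structure for your version.

```yaml
# connection.yml -- ensure a single connection exists (sketch, see note above)
- name: Ensure connection {{ conn.name }} is present
  xml:
    path: "{{ connections_file }}"    # assumption: path to connections.xml
    xpath: "/References/Reference[@name='{{ conn.name }}']"
    state: present
```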
Now the main playbook.
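A minimal sketch, assuming the variables file and connection.yml from the previous steps (the host group name is my own placeholder):

```yaml
- hosts: sqldeveloper_clients       # assumption: your client host group
  vars_files:
    - vars.yml
  tasks:
    - name: Add every declared connection
      include_tasks: connection.yml
      loop: "{{ sqldev_connections }}"
      loop_control:
        loop_var: conn
```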
NB! This is just an extract from the playbook. I expect you are familiar with Ansible and know how to put all three files together 🙂
Managing Oracle Database homes and patching them on a large scale is a challenge. Patching is a must today due to all the security threats out there plus all the bugs that you will hit during normal database operations.
You can read about the challenges and solutions in Ludovico Caldara's blog series.
Here I’d like to share my solution. The main idea of this solution is simple:
Never patch existing Oracle home, even when you just need to apply tiny one-off patch. Always install a new home and eventually remove the old one.
It is not possible to execute this strategy in a large environment without automation and here I’m sharing my automation solution using Ansible.
Features of this solution:
- Oracle home configurations become code
- Runs over any number of clusters or single hosts, with the same configuration, in parallel
- Maintains a list of homes (or flavours of homes) that each cluster/single host should have installed, and which homes need to be removed
- An Oracle Grid Infrastructure or Oracle Restart installation is required
- Fully automated, up to the point that you have a job in Jenkins that is triggered by a push to the version control system (git)
- The home description in the Ansible variable file also serves as documentation
- All tasks are idempotent, so you can execute the playbook multiple times; if the servers are already in the desired state, nothing will be changed
Ideal workflow to install a new home:
- Describe the home in an Ansible variable file: home name, base installer location and the list of patches needed
- Attach the home name to clusters/hosts in the Ansible files
- Commit and push to git
- Go through your typical git workflow to get the change into the release branch: create a pull request, have it reviewed by peers, merge it into the release branch
- A job in Jenkins triggers on the push to the release branch and executes the Ansible playbook on the target/all hosts
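To make the workflow concrete, a hedged sketch of such a variable file is below; home names, paths and patch numbers are placeholders, not real values.

```yaml
# Describe each home flavour once...
oracle_homes:
  db_19c_v2:
    installer: /nfs/stage/LINUX.X64_193000_db_home.zip
    patches:
      - 12345678   # placeholder: quarterly Release Update
      - 23456789   # placeholder: one-off fix on top

# ...then attach flavours to clusters/hosts, e.g. in group_vars:
homes_present:
  - db_19c_v2
homes_absent:
  - db_19c_v1
```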
Some background info: we need to provide direct database access to many developers and data analysts. On the database side, managing so many different users securely means you have to use the same authentication source as all the other systems in the company, for example Active Directory. For our databases we have implemented this using RADIUS, and for Oracle Database this means users have to use a thick Oracle client on their computers.
Installing an Oracle client is not a trivial task for many users, and the macOS Instant Client that Oracle provides has been very dependent on the exact macOS version. To help users overcome these problems, I decided to create a small VirtualBox VM that they could just download and run. The VM is based on Fedora Linux and has a preconfigured Oracle Instant Client with common client tools like SQL*Plus and SQL Developer.
This kind of VM needs to be maintained, and to push out changes I had already set up a git repository containing an Ansible playbook. In the VM, under the root user, I checked out this git repository and set up a regular cron job to run one specified playbook locally.
The hourly crontab job looks something like this:
cd ansible-sqldeveloper-vm && git pull && ansible-playbook local.yml
This playbook can also be used to push out changes at the application level, for example maintaining a common tnsnames.ora file to include newly created databases.
It did not take long to discover the first bug with this setup: SQL Developer under Linux “loses” the text cursor in the editor window very easily; it is not gone, just hidden. To fix that, users need to go into the SQL Developer preferences and change how the text cursor looks. I don't want users to have to fix this common problem themselves.
Another issue: a new SQL Developer version came out. It is easy to push out the new RPM using Ansible, but SQL Developer does not start immediately after that; users first have to configure where it can find the JDK. Another thing I really don't want my users to deal with.
Ansible to the rescue. All I needed to do was commit the necessary changes to my local.yml playbook and update it in the git repository.
Let's take the easier issue first: telling a newly installed SQL Developer where to find the JDK.
SQL Developer 17.4 reads the JDK location from the file $HOME/.sqldeveloper/17.4.0/product.conf, so /home/sqldeveloper/.sqldeveloper/17.4.0/product.conf in my case. This file contains only one important line, which tells the location of the JDK.
All I need to do is use the Ansible copy module to create this file.
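A hedged sketch of that task is below. product.conf does use a SetJavaHome directive, but the JDK path here is an assumption; point it at the JDK actually installed in the VM.

```yaml
- name: Tell SQL Developer where to find the JDK
  copy:
    dest: /home/sqldeveloper/.sqldeveloper/17.4.0/product.conf
    owner: sqldeveloper
    group: sqldeveloper
    content: |
      # Managed by Ansible
      # NB: adjust the JDK path below to your installation (assumption)
      SetJavaHome /usr/java/latest
```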
Now the other problem: I need to change specific SQL Developer preferences. When I made the required changes on a test system, I found that SQL Developer writes its preferences to the XML file $HOME/.sqldeveloper/system&lt;version&gt;/o.sqldeveloper/product-preferences.xml (the exact location is SQL Developer version specific). The specific change was a newly added XML tag under the root element:
<hash n="CaretOptions">
  <value n="caretColor" v="-31744"/>
  <value n="insertShape" v="8"/>
  <value n="overwriteShape" v="5"/>
</hash>
This file contains all user preferences, so I don't want to overwrite the whole file; I only want to add this new XML tag. That is possible using the Ansible xml module: I can give an XPath expression as an argument and the module will make sure the element described by that XPath is present in the file.
I also want this to work for all SQL Developer versions, without having to change the path every time SQL Developer is upgraded.
The resulting Ansible playbook looks like this.
First I'll describe the needed changes in a separate Ansible variables file, just to keep the playbook itself cleaner.
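For example (file and variable names are my own choice), the caret settings from the XML tag above expressed as data:

```yaml
# sqldev_preferences.yml
caret_options:
  - { n: caretColor, v: "-31744" }
  - { n: insertShape, v: "8" }
  - { n: overwriteShape, v: "5" }
```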
The playbook itself:
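Since the playbook is not reproduced here, below is a hedged sketch of how it could look: find every version-specific product-preferences.xml under the user's home, then enforce the CaretOptions element in each one. The XPath assumes the document root is a preferences element; verify this against your actual file.

```yaml
- name: Find product-preferences.xml for every SQL Developer version
  find:
    paths: /home/sqldeveloper/.sqldeveloper
    patterns: product-preferences.xml
    recurse: true
  register: pref_files

- name: Ensure the CaretOptions hash element exists
  xml:
    path: "{{ item.path }}"
    xpath: "/preferences/hash[@n='CaretOptions']"
    state: present
  loop: "{{ pref_files.files }}"

- name: Ensure each caret value carries the right attribute value
  xml:
    path: "{{ item.0.path }}"
    xpath: "/preferences/hash[@n='CaretOptions']/value[@n='{{ item.1.n }}']"
    attribute: v
    value: "{{ item.1.v }}"
  loop: "{{ pref_files.files | product(caret_options) | list }}"
```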
We have hundreds of developers who need access to the production database for incident management purposes. But we don't want to use shared accounts: each user has their own database account that is audited and has privileges according to the user's access level.
Managing all these users manually on the Oracle Database side is just too much work, especially since all their access details are already described in Active Directory. Wouldn't it be nice if we could just syncronise the AD users to the database side? Luckily there is an Ansible module for this task.
First, on the database side, we need to create a dedicated profile for the syncronised users:
CREATE PROFILE ldap_user LIMIT password_life_time UNLIMITED;
I assume you are already familiar with Ansible, so I'll go straight to the point.
First you need to clone the ansible-oracle-modules to your playbook directory:
git clone https://github.com/oravirt/ansible-oracle-modules library
This contains, among other goodies, a module that does exactly what is required 🙂 The module is called oracle_ldapuser.
This module requires the extra python-ldap module to be installed. Install it using yum, not pip; pip will install the wrong version.
yum install python-ldap
The playbook looks like this:
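As the playbook itself is not shown above, here is a sketch only: every parameter name below is an illustrative assumption, so check the oracle_ldapuser documentation in ansible-oracle-modules for the exact arguments your version accepts.

```yaml
- hosts: dbservers
  tasks:
    - name: Syncronise AD group members to database users
      oracle_ldapuser:                      # all parameter names are assumptions
        hostname: "{{ ansible_hostname }}"
        service_name: ORCL
        user: system
        password: "{{ system_password }}"
        ldap_host: ad.example.com
        ldap_user: "{{ bind_dn }}"
        ldap_password: "{{ bind_password }}"
        ldap_context: "OU=Groups,DC=example,DC=com"
        ldap_group: DB_DEVELOPERS
        profile: ldap_user                  # the profile created earlier
        state: present
```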
Remember that this is a syncronisation: users are both created and removed when the playbook is run.