Ilmar Kerm

Oracle, databases, Linux and maybe more

This is a collection of Bash scripts that help manage multiple MySQL instances inside a cluster managed by Oracle Clusterware 11gR2, in order to achieve high availability. Each MySQL instance running inside this cluster is configured with a dedicated IP address (a Virtual IP resource in clusterware).

The scripts manage:

Initializing the MySQL data directory and configuring resources in Oracle Clusterware.
A clusterware actionscript to manage instance startup, shutdown, cleanup and check.
All scripts are built to support running multiple MySQL instances in the same cluster.
Each MySQL instance can run a different MySQL software version.
Automatic MySQL error/slow/general log rotation and archival.
Although these scripts are written with clusterware in mind, they can also be used without it, to help manage multiple MySQL instances (with different RDBMS versions) running on the same host.

Download the scripts from GitHub
View my usergroup presentation “Making MySQL highly available using Oracle Grid Infrastructure”

Purpose and requirements


These scripts are written to help running multiple MySQL instances (with multiple MySQL software versions) in a clustered environment managed by Oracle Clusterware 11gR2.

Oracle Clusterware is included in the Oracle Unbreakable Linux Basic Support package at no additional cost.

Design Requirements

  • Multiple MySQL instances run in the same cluster; in case of node failure, the affected MySQL instances are moved to another surviving node (the least loaded one)
  • Different MySQL instances may run different MySQL RDBMS software versions
  • Each MySQL instance listens on its own dedicated, clusterware-managed IP address (so MySQL can always respond on the default port 3306 and can move independently between nodes without any dependencies on other MySQL instances)
  • Clusterware monitors each MySQL instance and, in case of trouble, automatically moves the IP and starts the instance on another cluster node

Software Requirements

The scripts have been tested with:

  • Oracle Clusterware 11gR2
  • Oracle Enterprise Linux 5 and 6
  • MySQL 5.5, 5.6 Enterprise (tar package)
  • ACFS filesystem (bundled with Oracle Clusterware 11gR2)

Architecture overview


This document describes how to use Oracle Clusterware 11gR2 to make MySQL instances highly available.
Oracle Clusterware is included in the Oracle Unbreakable Linux Basic Support package.

Simple overview

Oracle Clusterware is mainly used for clustering Oracle Databases (Real Application Clusters, RAC), but it can also be used to make other applications run in a clustered environment using custom scripts.

  • All MySQL files, including the RDBMS software and data directories, are located on a clustered filesystem, accessible read-write from all cluster nodes.
  • For each instance, a virtual IP resource is registered in clusterware.
  • Each instance is configured as a resource in clusterware with a hard dependency on its corresponding virtual IP resource.
  • Clusterware calls the configured actionscript for the actions: start, stop, check, clean.
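The actionscript contract above can be sketched as a simple Bash dispatcher. This is illustrative only; the real action_handler.scr in the package does the actual work, while each branch here only echoes what it would do so the dispatch logic is visible:

```shell
#!/bin/bash
# Illustrative sketch of the actionscript protocol clusterware expects.
# Clusterware invokes the script with one argument: start, stop, check or clean.
mysql_action() {
    case "$1" in
        start) echo "starting instance" ;;   # would launch mysqld with instance options
        stop)  echo "stopping instance" ;;   # would call mysqladmin shutdown
        check) echo "checking instance" ;;   # would run mysqladmin ping; exit 0 = healthy
        clean) echo "cleaning instance" ;;   # would force-kill leftover mysqld processes
        *)     echo "Usage: $0 {start|stop|check|clean}"; return 1 ;;
    esac
}

mysql_action check
```

The check action's exit code is what clusterware acts on: a nonzero exit triggers restart attempts and, eventually, failover to another node.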


Filesystem overview

All the scripts are written with the assumption that all cluster nodes use shared storage and the same filesystem structure. The filesystem on shared storage should be mounted read-write on every node.

I have used the ACFS clustered filesystem for all shared filesystems, but if you have a reliable NAS device, using NFS is also an option.

Directory structure

The following directory structure is just a recommendation.

Basically there are two different types of filesystem mountpoints:
* Shared filesystem for the MySQL software and clustering scripts. This requires very little space; a 10-15 GB filesystem is enough. Also take into account that each MySQL 5.5 Advanced Edition tarball is over 600 MB.
* Shared filesystem for the MySQL instance data directories. This stores the databases. I like to use a separate LUN/filesystem for each MySQL instance.

/u02                                       Mountpoint for the software directory
/u02/app/mysql/                            Location for scripts
/u02/app/mysql/product/5.5.17/advanced     MySQL RDBMS software version 5.5.17 advanced edition
/u02/app/mysql/product/5.5.16/advanced     MySQL RDBMS software version 5.5.16 advanced edition
/u02/app/mysql/product/5.5.16/community    MySQL RDBMS software version 5.5.16 community edition
/instance/instance_name                    Mountpoint for the MySQL data-directory filesystem
/instance/instance_name/config/my.cnf      mysqld configuration file for instance instance_name
/instance/instance_name/data               Data directory for MySQL instance instance_name
/instance/instance_name/logs               Error/general/slow logs for instance instance_name; rotated and archived automatically by the scripts

Creating and mounting ACFS volumes

Creating ACFS filesystem for software:

-- todo
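Until this section is filled in, here is a rough sketch of what creating the software filesystem could look like on 11gR2. Every name below is an assumption: it presumes an existing ASM diskgroup called DATA, a volume name MYSQLSW, and the device path /dev/asm/mysqlsw-123 is made up; asmcmd volinfo shows the real device on your system. The data-directory filesystems are created the same way, one volume per instance.

```shell
# all names below (DATA, MYSQLSW, /dev/asm/mysqlsw-123) are examples only
asmcmd volcreate -G DATA -s 15G MYSQLSW        # create an ADVM volume
asmcmd volinfo -G DATA MYSQLSW                 # shows the actual volume device path
mkfs -t acfs /dev/asm/mysqlsw-123              # format the volume as ACFS
mkdir -p /u02
# register the filesystem in clusterware so it is mounted on every node
srvctl add filesystem -d /dev/asm/mysqlsw-123 -g DATA -v MYSQLSW -m /u02 -u mysql
srvctl start filesystem -d /dev/asm/mysqlsw-123
```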

Creating ACFS filesystem for MySQL instance data-directory:

-- todo

Resource dependencies on the ACFS clusterware resource (to make sure the MySQL instance is started after the filesystems are mounted):

-- todo
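Until that section is written, the general idea can be sketched as follows: the clusterware resource for an ACFS filesystem is named ora.<diskgroup>.<volume>.acfs, and the MySQL instance resource gets a hard start/stop dependency on it in addition to the VIP. The resource and volume names below are examples only:

```shell
# example names; adjust diskgroup/volume to your environment
crsctl modify resource sample_instance -attr \
  "START_DEPENDENCIES='hard(sample_instance_vip,ora.data.mysqlsw.acfs)',
   STOP_DEPENDENCIES='hard(sample_instance_vip,ora.data.mysqlsw.acfs)'"
```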


Execute everything as user root.

Create user for running mysql

adduser mysql
passwd -l mysql

Copy all scripts to /u02/app/mysql.
Set the execute flag on the script files:

chmod a+x action_handler.scr
chmod a+x
chmod a+x

Edit the configuration file and look over all global parameters.
All managed MySQL instances are configured in it as configuration blocks (NB! This is a BASH file, so all variables must follow BASH rules). Please take a look at AddNewInstance:
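As an illustration only, such a block could look like the sketch below. Every variable name and value here is a made-up example; the actual names used by the package's configuration file may differ:

```shell
# hypothetical instance configuration block -- BASH syntax,
# so no spaces around '=' and values quoted where needed
INSTANCE_NAME="sample_instance"
MYSQL_BASE="/u02/app/mysql/product/5.5.17/advanced"    # unpacked tar release
MYSQL_DATADIR="/instance/sample_instance/data"
MYSQL_CONFIG="/instance/sample_instance/config/my.cnf"
MYSQL_LOGDIR="/instance/sample_instance/logs"
MYSQL_VIP="10.0.0.101"                                 # clusterware-managed VIP
```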


Download the MySQL RDBMS software tar release and unpack it (assuming it is the 5.5.17 advanced edition) to /u02/app/mysql/product/5.5.17:

mkdir -p /u02/app/mysql/product/5.5.17
cd /u02/app/mysql/product/5.5.17
tar xzf mysql-advanced-5.5.17-linux2.6-x86_64.tar.gz
mv mysql-advanced-5.5.17-linux2.6-x86_64 advanced

Create a mysql client defaults file with the MySQL root password. This is needed to open the MySQL console on the local server without a password.

touch /root/.my.cnf
chmod 600 /root/.my.cnf
vi /root/.my.cnf

Example contents of /root/.my.cnf file:
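For example (the password below is a placeholder; use your actual MySQL root password):

```
[client]
user=root
password=change_me_root_password
```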



Execute it from crontab regularly (for example, once per day or once per week) on every cluster node.
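A crontab entry could look like this; the script path and name here are hypothetical placeholders, use the actual script from the package:

```
# /etc/crontab: run nightly at 03:05 on every cluster node
5 3 * * * root /u02/app/mysql/<logrotate-script>
```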

Adding a new instance

2.1. Add new MySQL instance automatically

The easiest way to add a new instance is by using the script, and after that running the second script to add the newly created instance to clusterware. The scripts do the following tasks automatically:

  • Initializes MySQL data directory
  • Creates MySQL user for clusterware monitoring
  • Updates configuration file
  • Adds resources in clusterware

All options:

This script initializes MySQL data directory and all cluster scripts.
Script must be run as user root.
-i name
        Instance name. Name should be 20 characters or less and only characters a-z, A-Z, 0-9 and _ are allowed
        (\w character class)

MySQL and instance settings:

-s directory
        MySQL software location (where tar-edition is unpacked)
-b ipaddr
        IP address to bind the instance
-a basedirectory
        Instance base directory, this replaces options -d -c -l by setting the following values
        -d basedirectory/data, -c basedirectory/config/my.cnf, -l basedirectory/logs
-d directory
        Empty MySQL data directory to initialize
-c file
        MySQL config file for this instance
-l directory
        MySQL error/slow/general logfile directory for this instance

Additional configuration:

        Do not update
        Do not set mysql root password and clusterware user
        (creating user requires starting mysqld and default password specified in /root/.my.cnf)
        Add resources to clusterware (VIP and application)

To create the MySQL instance sample_instance with its data directory under /instance/sample_instance/data, its config file located at /instance/sample_instance/config/my.cnf and its logs under /instance/sample_instance/logs, execute the following commands:

[root@dbmc1n01 mysql]# mkdir -p /instance/sample_instance/data
[root@dbmc1n01 mysql]# mkdir -p /instance/sample_instance/config
[root@dbmc1n01 mysql]# mkdir -p /instance/sample_instance/logs
[root@dbmc1n01 mysql]# touch /instance/sample_instance/config/my.cnf
[root@dbmc1n01 mysql]# ./ -i sample_instance -a /instance/sample_instance \
> -b -s /u02/app/mysql/product/5.5.20/advanced
Installing MySQL system tables...
Filling help tables...

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

To do so, start the server, then issue the following commands:

/u02/app/mysql/product/5.5.20/advanced/bin/mysqladmin -u root password 'new-password'
/u02/app/mysql/product/5.5.20/advanced/bin/mysqladmin -u root -h password 'new-password'

Alternatively you can run:

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /u02/app/mysql/product/5.5.20/advanced ; /u02/app/mysql/product/5.5.20/advanced/bin/mysqld_safe &

You can test the MySQL daemon with
cd /u02/app/mysql/product/5.5.20/advanced/mysql-test ; perl

Please report any problems with the /u02/app/mysql/product/5.5.20/advanced/scripts/mysqlbug script!

Starting MySQL...
Creating 'clusterware'@'localhost' user...
Changing root password...
Stopping MySQL...
Updating cluster configuration /u02/app/mysql/
Symlinking actionscript sample_instance.scr...

Execute it to add the newly created instance to clusterware. You can skip this step if you already added the resources with default values by specifying the -g option previously:

All options you can use with the script:

[root@dbmc1n01 mysql]# ./

This script initializes MySQL instance in Oracle Clusterware, by creating virtual IP resource and application resource.
Script must be run as user root.
-i name
        Instance name. Name should be 20 characters or less and only characters a-z, A-Z, 0-9 and _ are allowed (\w character class)
-t type (default cluster_resource)
        Resource type
-p pool name (default Generic)
        Server pool name
-l number (default 10)
        Instance load attribute

To add the new instance using the resource type mysql_instance (where you can set the check interval and other parameters) and to the server pool pte, use the following command:

[root@dbmc1n01 mysql]# ./ -i sample_instance -t mysql_instance -p pte

2.2. Add new MySQL instance manually

NB! All of these tasks can be done automatically using the script described above.

Execute everything as user root.
I assume that the newly added instance is named sample_instance.

Create the mysqld config file /instance/sample_instance/config/my.cnf. This is a regular mysql configuration file. Do not specify the following parameters in the configuration file (they will be ignored, because they are set on the command line at startup):


Initialize the MySQL data directory /instance/sample_instance/data:

chown mysql:mysql /instance/sample_instance/data
/u02/app/mysql/product/5.5.17/advanced/scripts/mysql_install_db \
  --skip-name-resolve \
  --basedir=/u02/app/mysql/product/5.5.17/advanced \
  --datadir=/instance/sample_instance/data

Add instance configuration to /u02/app/mysql/


Symlink actionscript sample_instance.scr

cd /u02/app/mysql
ln -s action_handler.scr sample_instance.scr

Start mysql

/u02/app/mysql/sample_instance.scr start

Check the MySQL log files and ensure that the MySQL process has started.
Start the MySQL console and create the clusterware user (the reload privilege is needed for the logrotate feature):


 *******   Wed Nov  9 12:50:03 CET 2011 **********
Action script '' for resource[] called for action
Handler script: ./ mysql instance sample_instance
Usage: ./ {start|stop|check|clean}. All other arguments execute MySQL Console.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.5.17-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)

Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create user 'clusterware'@'localhost' identified by 'cluster123';
mysql> grant reload on *.* to 'clusterware'@'localhost';

Check the instance by pinging MySQL; if the exit value is 0, all is good and the instance is working.

/u02/app/mysql/sample_instance.scr check

 *******   Wed Nov  9 13:06:03 CET 2011 **********
Action script '' for resource[] called for action check
Handler script: ./ mysql instance sample_instance
Exit value: 0

Create Virtual IP resource in clusterware

appvipcfg create -network=1 -ip= -vipname=sample_instance_vip -user=root
crsctl setperm resource sample_instance_vip -o root
crsctl setperm resource sample_instance_vip -u user:grid:r-x

Create application resource in clusterware

crsctl add resource sample_instance -type cluster_resource \
-attr "ACTION_SCRIPT=/u02/app/mysql/sample_instance.scr, PLACEMENT='balanced', LOAD=10, CHECK_INTERVAL='5', RESTART_ATTEMPTS='2', START_DEPENDENCIES='hard(sample_instance_vip)', STOP_DEPENDENCIES='hard(sample_instance_vip)'"
crsctl setperm resource sample_instance -o root
crsctl setperm resource sample_instance -u user:grid:r-x


The instance can now be started, stopped and relocated through clusterware:
crsctl start resource sample_instance
crsctl stop resource sample_instance -f
crsctl relocate resource sample_instance -f


Automatic log rotation and archival
The package includes a script to automatically rotate and expire MySQL log files for all managed instances. It depends on the standard Linux logrotate package (included in RHEL/OEL distributions).

This script is expected to run on all cluster nodes at the same time from crontab. The script requires no parameters. It works as follows:

  • Try to acquire a cluster-wide lock. The purpose of this is to ensure that only one node does the actual file rotation while the other nodes just wait for it to complete.
  • If the lock was acquired, the script looks into all managed instance log directories, writes a Linux logrotate configuration file dynamically and executes logrotate using the generated configuration file.
  • If the lock was not acquired, the script waits until the lock is released by the other node and then continues with flushing.
  • Finally, after the log files have been rotated, all nodes execute mysqladmin flush-logs on all managed instances.
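The cluster-wide lock step can be sketched with flock(1) on a lock file. This is a minimal sketch, assuming the lock file lives on a filesystem where flock works; in a real cluster it must be a path on the shared filesystem, while the /tmp path below is just a local stand-in:

```shell
#!/bin/bash
# Sketch of the "one node rotates, the others wait" logic using flock(1).
LOCKFILE="/tmp/mysql_logrotate.lock"   # real setup: a path on the shared filesystem

exec 9>"$LOCKFILE"                     # open the lock file on fd 9
if flock -n 9; then                    # try to grab the lock without waiting
    STATUS="rotating"                  # this node writes the logrotate config and runs logrotate
else
    flock 9                            # lock busy: block until the rotating node releases it
    STATUS="waited"
fi
flock -u 9                             # release the lock

# afterwards every node flushes logs on each managed instance,
# e.g. mysqladmin flush-logs (using /root/.my.cnf for credentials)
echo "$STATUS"
```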