Ilmar Kerm

Oracle, databases, Linux and maybe more

Lately I’ve been doing quite a few database clustering implementations: Oracle RAC, and since we have many MySQL instances in production, I also had to find a good way to make the MySQL instances highly available.

One good solution is to manage the MySQL instances with clusterware, and since we are planning to run Oracle RAC on Oracle Enterprise Linux anyway, Oracle Clusterware is an excellent candidate for the task. Also… Oracle Clusterware is included with Oracle Enterprise Linux at no additional charge.

Requirements I had:

  • Multiple MySQL instances run in the same cluster; if a node fails, the affected MySQL instances are moved to any surviving node (the least loaded one)
  • Different MySQL instances may run different MySQL RDBMS software versions
  • Each MySQL instance listens on its own dedicated, clusterware-managed IP address (so MySQL can always respond on the default port 3306 and can move between nodes independently, without dependencies on other MySQL instances)
  • Clusterware monitors each MySQL instance and, in case of trouble, automatically moves the IP and starts the instance on another cluster node
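Clusterware monitors a resource through an action script, which it calls with a start, stop, check or clean argument and judges purely by the exit code. A minimal sketch of such a script for one MySQL instance (the paths, instance layout and use of mysqld_safe/mysqladmin here are my illustrative assumptions, not the actual project scripts, which are more complete):

```shell
#!/bin/bash
# Minimal Oracle Clusterware action-script sketch for one MySQL instance.
# Clusterware invokes this with one argument: start, stop, check or clean.
# All paths below are placeholders.

MYSQL_BASE=/opt/mysql/5.1          # MySQL software home for this instance
MYSQL_DATADIR=/data/mysql/inst1    # per-instance data directory
MYSQL_SOCKET=$MYSQL_DATADIR/mysql.sock

start_mysql() {
    # Launch mysqld in the background; Clusterware polls check() afterwards
    $MYSQL_BASE/bin/mysqld_safe --datadir=$MYSQL_DATADIR \
        --socket=$MYSQL_SOCKET >/dev/null 2>&1 &
}

stop_mysql() {
    $MYSQL_BASE/bin/mysqladmin --socket=$MYSQL_SOCKET shutdown
}

check_mysql() {
    # Exit code 0 tells Clusterware the resource is healthy
    $MYSQL_BASE/bin/mysqladmin --socket=$MYSQL_SOCKET ping >/dev/null 2>&1
}

case "$1" in
    start) start_mysql ;;
    stop)  stop_mysql ;;
    check) check_mysql ;;
    clean) stop_mysql ;;
    *)     echo "Usage: $0 {start|stop|check|clean}" ;;
esac
```

One such script (and one clusterware resource) is registered per MySQL instance, with a dependency on that instance's VIP resource, so each instance can fail over on its own.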

For this task I wrote some Bash scripts that help manage MySQL instances in an Oracle Clusterware environment. All the scripts are available here: Google Code project page, along with documentation of the whole setup and how to use the scripts: Documentation

All comments welcome!

Tested with CentOS 5 and PostgreSQL 8.4.

Sadly, PostgreSQL’s data import tool pg_restore has no remap_schema parameter (like Oracle Data Pump has), so importing tables into another schema in the destination database is a little bit tricky.
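The workaround is plain text surgery on the dump file: in plain-SQL format, pg_dump prefixes each group of objects with a SET search_path line naming the source schema, and that line can be rewritten before the import. A small illustration of the idea (assumes GNU sed; the sample file name is throwaway):

```shell
# Two search_path lines as pg_dump writes them for the two source schemas
cat > /tmp/dump_sample.sql <<'EOF'
SET search_path = srv, pg_catalog;
SET search_path = disp, pg_catalog;
EOF

# Point both source schemas at the destination schema "etl"
sed -i -r '/^SET search_path = /s/(srv|disp),/etl,/' /tmp/dump_sample.sql

# Both lines now read: SET search_path = etl, pg_catalog;
cat /tmp/dump_sample.sql
```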

Here is a Bash script I use to connect to the source database over an SSH tunnel, export two tables (srv.stat_export and disp.trip) and then import them into the destination database under the schema etl.

Download the script:

#!/bin/bash -l

# Open SSH tunnel to remote database
# Make sure that passwordless (public key) authentication is enabled
# NB: <tunnel-host> is a placeholder for the SSH host the tunnel goes through
ssh -C -N -L 15432:livedb2:5432 <tunnel-host> &
sleep 10s

# Set source PostgreSQL connection parameters
# Make sure that PostgreSQL tools (psql, pg_dump) can connect with these parameters without password
# if necessary use ~/.pgpass file
export PGDATABASE=db_live
export PGHOST=localhost
export PGPORT=15432
export PGUSER=exportuser

# Generate temporary filename
SCHEMAFILE=$(mktemp /tmp/schema_export.XXXXXX)

# Dump the data in plain-SQL format
pg_dump -t srv.stat_export -t disp.trip --file=$SCHEMAFILE -O --no-acl
RESULT=$?

# Kill SSH tunnel
kill %+

# If export was successfull, then start importing
if [ $RESULT -eq 0 ]; then
  # Replace source schema name with destination schema name
  sed -i -r '/^SET search_path = /s/(disp|srv),/etl,/g' $SCHEMAFILE

  # Set destination PostgreSQL connection parameters
  # Make sure that PostgreSQL tools (psql, pg_dump) can connect with these parameters without password
  export PGDATABASE=db_dev
  export PGHOST=localhost
  export PGPORT=5432
  export PGUSER=postgres

  # Drop old tables (if exists avoids an error on the first run)
  echo 'drop table if exists etl.stat_export; drop table if exists etl.trip;' | psql -q
  # Load new tables
  psql -q < $SCHEMAFILE
  # Fix table ownership
  echo 'alter table etl.stat_export owner to etl; alter table etl.trip owner to etl;' | psql -q

fi

# Remove temporary file
rm -f $SCHEMAFILE
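Since the script needs no interaction (passwordless SSH and, if necessary, ~/.pgpass for the database connections), it can be scheduled from cron; a hypothetical crontab entry refreshing the dev tables nightly (the paths are placeholders):

```
# Refresh etl tables in db_dev every night at 03:30
30 3 * * * /home/etl/copy_tables.sh >> /var/log/copy_tables.log 2>&1
```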