Ilmar Kerm

Oracle, databases, Linux and maybe more

Oracle recently released a thin driver for Python: unlike cx_Oracle, it does not require any Oracle client installation, because it is written natively in Python. One benefit is that it works on every platform where Python runs, including ARM64.

You can read more about it here: https://levelup.gitconnected.com/open-source-python-thin-driver-for-oracle-database-e82aac7ecf5a

Interestingly, it is not yet trivial to find examples of how to use the new thin driver to connect to Oracle Autonomous Database.

By default, Autonomous Database requires mTLS connections, so first create and download the instance wallet for your Autonomous Database (a zip file), and remember the password you set. Unzip it and transfer ewallet.pem and tnsnames.ora to your Python host.

# Create directory for wallet files
$ mkdir wallet
$ pwd
/home/ilmar/wallet

# Transport ewallet.pem and tnsnames.ora to that directory
$ ls /home/ilmar/wallet/
ewallet.pem
tnsnames.ora

If you have not yet done so, install the Python thin driver.

$ pip3.9 install --user oracledb
Collecting oracledb
  Downloading oracledb-1.1.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (6.7 MB)
     |████████████████████████████████| 6.7 MB 24.1 MB/s 
Collecting cryptography>=3.2.1
  Downloading cryptography-38.0.1-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl (3.7 MB)
     |████████████████████████████████| 3.7 MB 34.9 MB/s 
Collecting cffi>=1.12
  Downloading cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (448 kB)
     |████████████████████████████████| 448 kB 90.9 MB/s 
Collecting pycparser
  Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
     |████████████████████████████████| 118 kB 119.8 MB/s 
Installing collected packages: pycparser, cffi, cryptography, oracledb
Successfully installed cffi-1.15.1 cryptography-38.0.1 oracledb-1.1.1 pycparser-2.21

Connect to the Autonomous Database from Python code, connect.py:

import oracledb, os

db = oracledb.connect(
       user=os.environ['ORACLE_USER'],
       password=os.environ['ORACLE_PASSWORD'],
       dsn=os.environ['ORACLE_TNS'],
       config_dir="/home/ilmar/wallet",
       wallet_location="/home/ilmar/wallet",
       wallet_password="wallet_password_here")

with db.cursor() as c:
    for row in c.execute("SELECT owner, object_name FROM all_objects FETCH FIRST 10 ROWS ONLY"):
        print(row)

db.close()

Execute:

export ORACLE_USER=my_database_username
export ORACLE_PASSWORD=my_database_password
export ORACLE_TNS=connection_identifier_from_tnsnames.ora

$ python3.9 connect.py 
('SYS', 'ORA$BASE')
('SYS', 'DUAL')
('PUBLIC', 'DUAL')
('PUBLIC', 'MAP_OBJECT')
('SYS', 'SYSTEM_PRIVILEGE_MAP')
('SYS', 'I_SYSTEM_PRIVILEGE_MAP')
('PUBLIC', 'SYSTEM_PRIVILEGE_MAP')
('SYS', 'TABLE_PRIVILEGE_MAP')
('SYS', 'I_TABLE_PRIVILEGE_MAP')
('PUBLIC', 'TABLE_PRIVILEGE_MAP')

Here I used an Oracle Linux 8 server running on Ampere (ARM64) in the Oracle Cloud Always Free tier. No Oracle Database client or Instant Client was installed.

Most (all?) of the Oracle ZFS Storage Appliance (ZS3-4 in my case) management functions are also exposed through a REST API, so it is possible to script and automate storage management tasks. No proprietary clients or GUI are needed; a simple script that sends REST commands over HTTPS to the appliance and parses the JSON output is enough.
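To get a feel for the "parse the JSON output" half before the full script, here is a minimal sketch that parses a body shaped like the replication-package response used further down. The field names (package, last_sync, projects, href) match what the script below reads; the values are invented for illustration and do not come from a real appliance.

```python
import json

# A body shaped like GET /api/storage/v1/replication/sources/<src>/packages/<id>
# would return; the values are made up for illustration.
sample = '''
{
  "package": {
    "last_sync": "20140717T10:00:00",
    "projects": [
      {"name": "PROD_DB", "href": "/api/storage/v1/replication/sources/prodc1n1/packages/..."}
    ]
  }
}
'''

package = json.loads(sample)["package"]
print(package["last_sync"])                       # replication sync timestamp
print([p["name"] for p in package["projects"]])   # projects inside the package
```

In the real script the body comes from requests.get(...).text instead of a literal string, but the parsing is the same.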

I had a task of automating test database refresh from production, and part of that workflow is cloning and attaching LUNs from ZFSSA. I did not find any sample code online for accessing ZFSSA, so here I am publishing my code. It is a Python script that makes use of a few ZFSSA REST API calls and exposes them as Fabric tasks.

This script exposes four tasks:

  • clone – Creates a new clone from replication
  • attach_stage1 – Attaches all LUNs in a project to a single target server
  • attach_stage2 – Attaches all LUNs in a project to multiple target servers
  • drop – Deletes a project

#!/usr/bin/env fab
from fabric.api import task, hosts, settings, env
from fabric.utils import abort
import requests, json, os
from datetime import date

today = date.today()
# ZFSSA API URL
url = "https://test1c1n1.test1.company.com:215"
# ZFS project name where the clone will be stored; it is created during the clone operation
zfsproject = "TEST_REFRESH-%s" % today.strftime('%Y%m%d')
# Replication source name
repsource = "prodc1n1"
# Replication package ID
reppackage = "6a7e390e-b159-4d12-d111-a519f0f5168e"
# ZFSSA authentication credentials, read from environment variables ZFSUSER and ZFSPASSWORD
zfsauth = (os.getenv('ZFSUSER'), os.getenv('ZFSPASSWORD'))
jsonheader = {'Content-Type': 'application/json'}

# This gets the pool name and last sync timestamp
def read_pool():
    global zfspool, lastsync, zfsname
    # Read the replication package properties
    r = requests.get("%s/api/storage/v1/replication/sources/%s/packages/%s" % (url, repsource, reppackage), auth=zfsauth, verify=False, headers=jsonheader)
    if r.status_code != 200:
        abort("Error getting information about the specified replication package: %s %s" % (r.status_code, r.text))
    j = json.loads(r.text)
    lastsync = j["package"]["last_sync"]
    # Read the project in the replication package to get the pool name
    for proj in j["package"]["projects"]:
        r2 = requests.get("%s%s" % (url, proj["href"]), auth=zfsauth, verify=False, headers=jsonheader)
        if r2.status_code != 200:
            abort("Error getting the replication project details: %s %s" % (r2.status_code, r2.text))
        j2 = json.loads(r2.text)
        zfspool = j2["project"]["pool"]
        zfsname = j2["project"]["name"]
        break  # Only want the first project

@task
def clone():
    read_pool()
    print("Last sync time: %s" % lastsync)
    # First check if the target project name already exists
    r = requests.get("%s/api/storage/v1/pools/%s/projects/%s" % (url, zfspool, zfsproject), auth=zfsauth, verify=False, headers=jsonheader)
    if r.status_code != 404:
        abort("ZFS project %s already exists (or other error): %s" % (zfsproject, r.status_code))
    print("Cloning %s to project %s" % (zfsname, zfsproject))
    payload = {'target_project': zfsproject}
    r3 = requests.put("%s/api/storage/v1/replication/sources/%s/packages/%s/clone" % (url, repsource, reppackage), auth=zfsauth, verify=False, data=json.dumps(payload), headers=jsonheader)
    if r3.status_code == 202:
        print("Cloned")
    else:
        abort("ZFS Clone failed: %s" % r3.text)
    # Snapshot the target project to preserve the initial state
    payload = {'name': 'initial-backup'}
    r = requests.post("%s/api/storage/v1/pools/%s/projects/%s/snapshots" % (url, zfspool, zfsproject), auth=zfsauth, verify=False, headers=jsonheader, data=json.dumps(payload))
    if r.status_code == 201:
        print("Initial snapshot created")
    else:
        abort("Creating initial snapshot failed")

def attach_luns(initgroups):
    read_pool()
    print("Attaching LUNs in project %s" % zfsproject)
    r = requests.get("%s/api/storage/v1/pools/%s/projects/%s/luns" % (url, zfspool, zfsproject), auth=zfsauth, verify=False, headers=jsonheader)
    if r.status_code == 200:
        j = json.loads(r.text)
        payload = {'status': 'online', 'targetgroup': 'iscsi-all', 'initiatorgroup': initgroups}
        for lun in j["luns"]:
            if "fault" in lun:
                abort("Fault received from ZFSSA: %s" % r.text)
            print(lun["name"])
            rx = requests.put("%s/api/storage/v1/pools/%s/projects/%s/luns/%s" % (url, zfspool, zfsproject, lun["name"]), auth=zfsauth, verify=False, headers=jsonheader, data=json.dumps(payload))
            # The "or len(initgroups) > 1" part works around a ZFSSA API bug: with more than
            # one initiator group it reports an internal error but completes the task anyway
            if rx.status_code == 202 or len(initgroups) > 1:
                print("Attached")
            else:
                abort("Attaching failed: %s" % rx.text)
    else:
        abort("Getting list of LUNs failed: %s" % r.text)

@task
def attach_stage1():
    # Attach all LUNs in a project to a single initiator group "server1-iscsi"
    attach_luns(["server1-iscsi"])

@task
def attach_stage2():
    # Attach all LUNs in a project to multiple initiator groups (for example to use them as shared storage in a RAC setup)
    attach_luns(["cluster1n01-iscsi", "cluster1n02-iscsi", "cluster1n03-iscsi"])

# CLEANUP
@task
def drop_initial_snapshot():
    read_pool()
    print("Dropping initial snapshot")
    r = requests.delete("%s/api/storage/v1/pools/%s/projects/%s/snapshots/initial-backup" % (url, zfspool, zfsproject), auth=zfsauth, verify=False, headers=jsonheader)
    if r.status_code == 204:
        print("Initial snapshot dropped")
    else:
        abort("Dropping initial snapshot failed")

@task
def drop():
    read_pool()
    print("Dropping project %s" % zfsproject)
    r = requests.delete("%s/api/storage/v1/pools/%s/projects/%s" % (url, zfspool, zfsproject), auth=zfsauth, verify=False, headers=jsonheader)
    if r.status_code == 204:
        print("OK")
    else:
        abort("Dropping project failed: %s" % r.text)

The ZFSSA REST API works fine if each LUN has only a single initiator group, but when assigning a LUN to multiple initiator groups (to be used in a RAC setup), I observed the following errors (they must be ZFSSA REST API bugs):

  • When modifying a LUN to be part of multiple initiator groups (PUT request to /api/storage/v1/pools/poolname/projects/projectname/luns/lunname), the API responds with (argument numbers may differ): {"fault": {"message": "invalid input argument (assignednumber: invalid literal for int() with base 10: '0,0,0')", "code": 400, "name": "ERR_INVALID_ARG"}}. Nevertheless, the ZFS GUI confirms that the operation completed successfully. This script simply ignores the returned error message when assigning a LUN to multiple initiator groups.
  • If any LUN in the project has multiple initiator groups defined, the API call to get the list of LUNs in a project (GET request to /api/storage/v1/pools/poolname/projects/projectname/luns) immediately returns an error: {"luns":[{"fault": {"message": "internal error", "code": 500, "name": "ERR_INTERNAL"}}]}. This script cannot reattach the LUNs in this case, since getting the list of LUNs fails.
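The second failure mode can at least be detected cleanly before iterating over the LUN list. Here is a minimal sketch, runnable without an appliance: the helper name and the sample LUN names are mine, but the fault body is the one quoted above.

```python
import json

def lun_names_or_fault(body):
    """Return LUN names from a ZFSSA 'list LUNs' response body, or raise
    if the list carries a fault entry instead of LUN objects (as happens
    when any LUN in the project has multiple initiator groups)."""
    entries = json.loads(body)["luns"]
    for entry in entries:
        if "fault" in entry:
            raise RuntimeError("Fault received from ZFSSA: %s" % entry["fault"]["message"])
    return [entry["name"] for entry in entries]

# A normal response (made-up LUN names) parses cleanly:
print(lun_names_or_fault('{"luns": [{"name": "datafile01"}, {"name": "redo01"}]}'))
# prints ['datafile01', 'redo01']

# The buggy response quoted above raises instead of half-processing the list:
fault_body = '{"luns": [{"fault": {"message": "internal error", "code": 500, "name": "ERR_INTERNAL"}}]}'
try:
    lun_names_or_fault(fault_body)
except RuntimeError as exc:
    print(exc)
# prints Fault received from ZFSSA: internal error
```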