Tag: heterogeneous
- Written by: ilmarkerm
- Category: Blog entry
- Published: August 4, 2009
I got a task some time ago: copy many millions of rows from Oracle to MySQL, as fast as possible.
In the last year I have used Oracle Heterogeneous Services (HSODBC) a lot to select data from different non-Oracle databases, but I had not yet tried unloading data from Oracle with the best possible performance.
My first thought was, of course, that the process needs to run in parallel. Here I’ll first take a look at the overall HSODBC performance and then look at how I can make this process run in parallel.
In this test I use Oracle Database 11.1.0.7 as the source, Oracle Heterogeneous Services 10.2.0.4 with the MyODBC 5.1 driver, and MySQL 5.1 as the destination database. In my setup all three components run on separate servers.
SQL> CREATE TABLE hsperftest AS
     SELECT * FROM all_objects
     UNION ALL SELECT * FROM all_objects
     UNION ALL SELECT * FROM all_objects;

SQL> DESC hsperftest
 Name                                        Null?    Type
 ------------------------------------------- -------- ---------------------
 OWNER                                                VARCHAR2(30)
 OBJECT_NAME                                          VARCHAR2(30)
 SUBOBJECT_NAME                                       VARCHAR2(30)
 OBJECT_ID                                            NUMBER
 DATA_OBJECT_ID                                       NUMBER
 OBJECT_TYPE                                          VARCHAR2(19)
 CREATED                                              DATE
 LAST_DDL_TIME                                        DATE
 TIMESTAMP                                            VARCHAR2(19)
 STATUS                                               VARCHAR2(7)
 TEMPORARY                                            VARCHAR2(1)
 GENERATED                                            VARCHAR2(1)
 SECONDARY                                            VARCHAR2(1)
 NAMESPACE                                            NUMBER
 EDITION_NAME                                         VARCHAR2(30)
 ID                                          NOT NULL NUMBER(10)
And I create the same table in MySQL.
CREATE TABLE `hsperftest` (
  `OWNER` varchar(30) DEFAULT NULL,
  `OBJECT_NAME` varchar(30) DEFAULT NULL,
  `SUBOBJECT_NAME` varchar(30) DEFAULT NULL,
  `OBJECT_ID` int(11) DEFAULT NULL,
  `DATA_OBJECT_ID` int(11) DEFAULT NULL,
  `OBJECT_TYPE` varchar(19) DEFAULT NULL,
  `CREATED` datetime DEFAULT NULL,
  `LAST_DDL_TIME` datetime DEFAULT NULL,
  `TIMESTAMP` varchar(19) DEFAULT NULL,
  `STATUS` varchar(7) DEFAULT NULL,
  `TEMPORARY` varchar(1) DEFAULT NULL,
  `GENERATED` varchar(1) DEFAULT NULL,
  `SECONDARY` varchar(1) DEFAULT NULL,
  `NAMESPACE` int(11) DEFAULT NULL,
  `EDITION_NAME` varchar(30) DEFAULT NULL,
  `ID` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Set up HSODBC to connect to the MySQL database and create a database link on the Oracle side. When setting up HSODBC, do not enable logging/tracing (HS_FDS_TRACE_LEVEL = off)! Tracing will serialize all calls to the remote database.
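For reference, a minimal setup could look something like the sketch below; the gateway SID, DSN, and credentials are placeholders for illustration, not the configuration used in this test.

# $ORACLE_HOME/hs/admin/inithsdb.ora -- agent init file; the "hsdb" SID is an assumption
HS_FDS_CONNECT_INFO = mysql_dsn      # ODBC DSN pointing at the MySQL database (assumed name)
HS_FDS_TRACE_LEVEL = off             # keep tracing off, as noted above
# On Linux/unixODBC something like this may also be needed:
# HS_FDS_SHAREABLE_NAME = /usr/lib/libodbc.so

-- Oracle side, assuming matching listener.ora/tnsnames.ora entries for 'hsdb':
CREATE DATABASE LINK hsdb CONNECT TO "mysqluser" IDENTIFIED BY "secret" USING 'hsdb';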
I tested this with HSODBC running on Windows Vista and Linux (using unixODBC). The results were similar.
Test that the MySQL table is accessible from Oracle:
SQL> DESC "hsperftest"@hsdb Name Null? Type ----------------------------------------------------- -------- ----------------- OWNER VARCHAR2(30) OBJECT_NAME VARCHAR2(30) SUBOBJECT_NAME VARCHAR2(30) OBJECT_ID NUMBER(10) DATA_OBJECT_ID NUMBER(10) OBJECT_TYPE VARCHAR2(19) CREATED DATE LAST_DDL_TIME DATE TIMESTAMP VARCHAR2(19) STATUS VARCHAR2(7) TEMPORARY VARCHAR2(1) GENERATED VARCHAR2(1) SECONDARY VARCHAR2(1) NAMESPACE NUMBER(10) EDITION_NAME VARCHAR2(30) ID NOT NULL NUMBER(10)
Before continuing, check that all MySQL data types are mapped correctly to Oracle, and make sure that none of the columns use the deprecated LONG datatype.
First of all… the following SQL will not work:
SQL> insert into "hsperftest"@hsdb select * from hsperftest;
insert into "hsperftest"@hsdb select * from hsperftest
*
ERROR at line 1:
ORA-02025: all tables in the SQL statement must be at the remote database
Some PL/SQL is needed to first select from an Oracle cursor and then bind the values to the remote database INSERT statement. The easiest way is:
BEGIN
  FOR rec IN (SELECT * FROM HSPERFTEST) LOOP
    INSERT INTO "hsperftest"@hsdb VALUES rec;
  END LOOP;
  COMMIT;
END;
/

PL/SQL procedure successfully completed.
Elapsed: 00:14:28.38
When running this anonymous block I noticed that the load on the Oracle database and the MySQL database was very light, but the HSODBC CPU usage was at its maximum. So HSODBC seems to be the bottleneck. What is it doing? With this kind of CPU usage it’s not going to scale much in parallel. If it’s performing some mapping between Oracle SQL and MySQL SQL, then can I remove this bottleneck by using DBMS_HS_PASSTHROUGH, which sends the query unmodified to the remote database?
DECLARE
  p_c BINARY_INTEGER;
  nr  INTEGER;
BEGIN
  p_c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@hsdb;
  DBMS_HS_PASSTHROUGH.PARSE@hsdb (p_c,
    'insert into hsperftest (OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,
     DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,
     TEMPORARY,GENERATED,SECONDARY,NAMESPACE,EDITION_NAME,ID)
     VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)');
  FOR rec IN (SELECT * FROM HSPERFTEST) LOOP
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 1, rec.owner);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 2, rec.object_name);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 3, rec.SUBOBJECT_NAME);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 4, rec.OBJECT_ID);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 5, rec.DATA_OBJECT_ID);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 6, rec.OBJECT_TYPE);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 7, rec.CREATED);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 8, rec.LAST_DDL_TIME);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 9, rec.TIMESTAMP);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 10, rec.STATUS);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 11, rec.TEMPORARY);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 12, rec.GENERATED);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 13, rec.SECONDARY);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 14, rec.NAMESPACE);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 15, rec.EDITION_NAME);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 16, rec.ID);
    nr := DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@hsdb (p_c);
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@hsdb (p_c);
  COMMIT;
END;
/

PL/SQL procedure successfully completed.
Elapsed: 00:07:37.77
That is a huge difference in execution time! And with DBMS_HS_PASSTHROUGH the HSODBC process uses much less CPU, making it more scalable for running in parallel.
Now, how to run it in parallel?
Since the destination database is non-Oracle, the database’s built-in parallel execution cannot be used, so “DIY parallelism” is needed: split/partition the input data by some criteria and start a new process for each partition.
In the real project I used each record’s CREATED_DATE value for partitioning, because the underlying table was already partitioned by that column, but it’s also possible to use table ROWID values for partitioning. A good example of this is in the book Expert Oracle Database Architecture by Thomas Kyte, chapter 14, Parallel Execution, Do-It-Yourself Parallelism.
Here I’m going to split the table into just 3 pieces to keep things simple. To get the maximum IO performance from Oracle, Tom Kyte’s example should be followed, but I’ll keep my example simple.
Table hsperftest_job will keep the metadata for every parallel slave.
CREATE TABLE hsperftest_job (
  job_no         NUMBER NOT NULL,
  low_id         ROWID  NOT NULL,
  hi_id          ROWID  NOT NULL,
  max_in         NUMBER NOT NULL,
  start_date     DATE,
  end_date       DATE,
  rows_processed NUMBER
);

CREATE SEQUENCE seq_hsperftest_job;

DECLARE
  p_min   ROWID;
  p_max   ROWID;
  p_row1  ROWID;
  p_row2  ROWID;
  p_count NUMBER;
BEGIN
  SELECT MIN(rowid), MAX(rowid), COUNT(*) INTO p_min, p_max, p_count
    FROM hsperftest;
  SELECT MAX(a) INTO p_row1
    FROM (SELECT rowid a FROM hsperftest WHERE rownum < CEIL(p_count/3));
  SELECT MAX(a) INTO p_row2
    FROM (SELECT rowid a, rownum rn FROM hsperftest WHERE rowid > p_row1)
   WHERE rn < CEIL(p_count/3);
  INSERT INTO hsperftest_job (job_no, low_id, hi_id, max_in)
    VALUES (seq_hsperftest_job.nextval, p_min, p_row1, 0);
  INSERT INTO hsperftest_job (job_no, low_id, hi_id, max_in)
    VALUES (seq_hsperftest_job.nextval, p_row1, p_row2, 0);
  INSERT INTO hsperftest_job (job_no, low_id, hi_id, max_in)
    VALUES (seq_hsperftest_job.nextval, p_row2, p_max, 1);
  COMMIT;
END;
/

SELECT j.job_no, COUNT (1)
  FROM hsperftest_job j, hsperftest h
 WHERE h.ROWID BETWEEN j.low_id AND j.hi_id
 GROUP BY j.job_no;

    JOB_NO   COUNT(1)
---------- ----------
         1      59313
         2      59314
         3      59317
Looks like my very simple ROWID hack is good enough for this test.
Now I'll create a procedure that will be executed in parallel.
CREATE OR REPLACE PROCEDURE hsperftest_proc(p_job_no hsperftest_job.job_no%TYPE) IS
  p_min    ROWID;
  p_max    ROWID;
  p_max_in NUMBER;
  p_c      BINARY_INTEGER;
  nr       INTEGER;
  p_count  NUMBER := 0;
BEGIN
  -- Read the job metadata
  SELECT low_id, hi_id, max_in INTO p_min, p_max, p_max_in
    FROM hsperftest_job WHERE job_no = p_job_no;
  UPDATE hsperftest_job SET start_date = sysdate WHERE job_no = p_job_no;
  COMMIT;
  -- Do the work
  p_c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@hsdb;
  DBMS_HS_PASSTHROUGH.PARSE@hsdb (p_c,
    'insert into hsperftest (OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,
     DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,
     TEMPORARY,GENERATED,SECONDARY,NAMESPACE,EDITION_NAME,ID)
     VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)');
  FOR rec IN (SELECT * FROM HSPERFTEST
               WHERE rowid BETWEEN p_min AND p_max
                 AND (p_max_in = 1 OR (p_max_in = 0 AND rowid < p_max)))
  LOOP
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 1, rec.owner);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 2, rec.object_name);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 3, rec.SUBOBJECT_NAME);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 4, rec.OBJECT_ID);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 5, rec.DATA_OBJECT_ID);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 6, rec.OBJECT_TYPE);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 7, rec.CREATED);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 8, rec.LAST_DDL_TIME);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 9, rec.TIMESTAMP);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 10, rec.STATUS);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 11, rec.TEMPORARY);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 12, rec.GENERATED);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 13, rec.SECONDARY);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 14, rec.NAMESPACE);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 15, rec.EDITION_NAME);
    DBMS_HS_PASSTHROUGH.BIND_VARIABLE@hsdb (p_c, 16, rec.ID);
    nr := DBMS_HS_PASSTHROUGH.EXECUTE_NON_QUERY@hsdb (p_c);
    p_count := p_count + 1;
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@hsdb (p_c);
  COMMIT;
  -- Write the job end time
  UPDATE hsperftest_job
     SET end_date = sysdate, rows_processed = p_count
   WHERE job_no = p_job_no;
  COMMIT;
END;
/
And finally I'll execute the jobs by putting them on the job queue.
DECLARE
  x PLS_INTEGER;
BEGIN
  FOR rec IN (SELECT job_no FROM hsperftest_job) LOOP
    DBMS_JOB.submit (job => x, what => 'hsperftest_proc(' || rec.job_no || ');');
  END LOOP;
  COMMIT;
END;
/

SQL> SELECT job, total_time, broken, what FROM user_jobs;

       JOB TOTAL_TIME B WHAT
---------- ---------- - -----------------
     10967         95 N hsperftest_proc(1);
     10968         95 N hsperftest_proc(2);
     10969         95 N hsperftest_proc(3);
When the jobs are running, monitor the CPU and memory usage of the HSODBC processes. If there are too many parallel slaves running, they may become a bottleneck.
If HSODBC memory usage becomes a problem, then partition the source data into smaller chunks.
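While the slaves are running, the hsperftest_job table itself gives a quick progress overview, since start_date is set when a slave begins and end_date/rows_processed when it finishes. A convenience query (not from the original test):

SELECT job_no, start_date, end_date, rows_processed
  FROM hsperftest_job
 ORDER BY job_no;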
When everything is done, look at the results.
SQL> SELECT SUM(rows_processed) total_rows,
            (MAX(end_date) - MIN(start_date)) * 24 * 60 runtime_min
       FROM hsperftest_job;

TOTAL_ROWS RUNTIME_MIN
---------- -----------
    177942  3.58333333
3.58 min ≈ 3 min 35 sec. Nice 🙂
- Written by: ilmarkerm
- Category: Blog entry
- Published: July 22, 2009
I have attended some APEX presentations/seminars, and one question keeps repeating and got my attention: can APEX run on another database?
Well, neither APEX itself nor an APEX application can run in any way on another database besides Oracle. APEX is a PL/SQL application, and no other database supports Oracle’s PL/SQL language.
But it is possible to select and modify data in another database from your APEX application.
First of all you need to set up a database link to the other database using Oracle Heterogeneous Services/Transparent Gateways. I’m not going to get into that right now; please use the documentation/Google for that.
I’m going to name my database link HSDB:
create database link hsdb
  connect to "lowercaseusername" identified by "randomCasepaSSword"
  using 'MYSQL_DB_SERVICE';
I’m going to use a MySQL database, and I’ll create the table employees there:
CREATE TABLE `employees` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(100) NOT NULL,
  `last_name` varchar(100) NOT NULL,
  `personal_code` varchar(20) NOT NULL,
  `birthday` date DEFAULT NULL,
  `salary` decimal(10,2) NOT NULL,
  `is_active` tinyint(4) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Test that table employees is accessible from Oracle:
select * from "employees"@hsdb;
NB! I created the table in MySQL with a lower-case table name, but Oracle uses upper-case identifiers by default, unless you force the correct case with double quotes.
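To illustrate (the exact error text may vary by driver/gateway version):

select * from employees@hsdb;
-- likely fails: the unquoted identifier is upper-cased to EMPLOYEES,
-- which does not exist on the MySQL side

select * from "employees"@hsdb;
-- works: double quotes preserve the lower-case name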
One other thing to look out for: check that ODBC/Oracle does not translate any of the columns to the deprecated LONG datatype. LONG seems to be used for long/unlimited text fields (text in MySQL, text in PostgreSQL). You can check it by running the DESC command from SQL*Plus.
desc "employees"@hsdb Name Null? Type -------------------------- -------- ----------------- id NUMBER(10) first_name NOT NULL VARCHAR2(100) last_name NOT NULL VARCHAR2(100) personal_code NOT NULL VARCHAR2(20) birthday DATE salary NOT NULL NUMBER(10,2) is_active NOT NULL NUMBER(3)
Make sure that none of the data types is LONG. If you have a LONG column, then you need to modify the data type in the source database or create an “Oracle friendly” view on top of it.
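As a sketch: if the MySQL table hypothetically had a text column notes (not part of this example), a source-side view could expose a truncated version that maps to VARCHAR2 instead of LONG (the column and view names are made up):

-- MySQL side: expose only a truncated, VARCHAR-mappable version of the text column
CREATE VIEW employees_oracle_friendly AS
SELECT id, first_name, last_name, personal_code, birthday, salary, is_active,
       LEFT(notes, 2000) AS notes_short
  FROM employees;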
With the PostgreSQL ODBC driver it’s also possible to modify the maximum VARCHAR length before it’s mapped to LONG.
Anyway, this is the most critical part in my opinion. Make sure that you are happy with the data type mappings before continuing!
Now create an Oracle view on top of that MySQL table. This view will translate all column names to upper case and will simplify writing the SQL code.
CREATE VIEW EMPLOYEES_MYSQL AS
SELECT "id" id,
       "first_name" first_name,
       "last_name" last_name,
       "personal_code" personal_code,
       "birthday" birthday,
       "salary" salary,
       "is_active" is_active
  FROM "employees"@hsdb;

desc employees_mysql
 Name                    Null?    Type
 ----------------------- -------- ----------------
 ID                               NUMBER(10)
 FIRST_NAME              NOT NULL VARCHAR2(300)
 LAST_NAME               NOT NULL VARCHAR2(300)
 PERSONAL_CODE           NOT NULL VARCHAR2(60)
 BIRTHDAY                         DATE
 SALARY                  NOT NULL NUMBER(10,2)
 IS_ACTIVE               NOT NULL NUMBER(3)
Simple report
Creating a simple report is very easy: just use the created EMPLOYEES_MYSQL view in APEX like any other normal Oracle table/view.
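For example, the report region source can be as plain as this (using the view created above):

select id, first_name, last_name, personal_code, birthday, salary, is_active
  from employees_mysql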
Complex report
Writing vanilla SQL against a non-Oracle table over a database link usually works when the SQL is really simple. But there are pretty serious limitations, and in some cases the performance “may not be very good” (for example, remote indexes may not be used because of data type conversions).
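For instance, a join between a local Oracle table and the remote view may end up shipping the entire remote table across the link before filtering, depending on what the optimizer can push to the remote side (ORDERS here is a hypothetical local table, purely for illustration):

select e.first_name, e.last_name, o.amount
  from orders o
  join employees_mysql e on e.id = o.employee_id;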
To solve that problem, Oracle has the package DBMS_HS_PASSTHROUGH, which sends an unmodified native SQL query to the linked database.
For this example I’ll use the following native MySQL SQL statement for a report:
select sql_cache id, first_name, last_name from employees limit 10
To be able to use this result in a SQL statement, I’m going to use a PIPELINED function. For that I first need to create a TYPE that defines the structure of the query output, and after that I’ll create the actual function that queries the remote database.
create or replace type mysql_hstest_type as object (
  id number,
  first_name varchar2(250),
  last_name varchar2(250)
);
/
create or replace type tab_mysql_hstest_type is table of mysql_hstest_type;
/
create or replace FUNCTION mysql_hstest_query RETURN tab_mysql_hstest_type PIPELINED IS
  p_row mysql_hstest_type := mysql_hstest_type(null, null, null);
  p_c   binary_integer;
BEGIN
  p_c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@hsdb;
  DBMS_HS_PASSTHROUGH.PARSE@hsdb(p_c,
    'select sql_cache id, first_name, last_name from employees limit 10');
  WHILE DBMS_HS_PASSTHROUGH.FETCH_ROW@hsdb(p_c) > 0 LOOP
    DBMS_HS_PASSTHROUGH.GET_VALUE@hsdb(p_c, 1, p_row.id);
    DBMS_HS_PASSTHROUGH.GET_VALUE@hsdb(p_c, 2, p_row.first_name);
    DBMS_HS_PASSTHROUGH.GET_VALUE@hsdb(p_c, 3, p_row.last_name);
    PIPE ROW(p_row);
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@hsdb(p_c);
  RETURN;
END;
/
Now, to use it in APEX, just use the following SQL in a report:
select * from table(mysql_hstest_query)
Form
First create a form on the view EMPLOYEES_MYSQL with the wizard, just like for any normal table. This will quickly create all the necessary page elements.
It’s necessary to implement the optimistic locking feature yourself, because the “Automatic Row Processing (DML)” process cannot be used, and optimistic locking is a “silent” built-in feature of that process. If for some weird reason you do not want the optimistic locking feature, you can skip the checksum and validation steps.
For checksumming I’ll create a hashing function (this one needs execute privileges on DBMS_CRYPTO):
CREATE OR REPLACE FUNCTION form_md5_checksum(
  p1  varchar2 default null,
  p2  varchar2 default null,
  p3  varchar2 default null,
  p4  varchar2 default null,
  p5  varchar2 default null,
  p6  varchar2 default null,
  p7  varchar2 default null,
  p8  varchar2 default null,
  p9  varchar2 default null,
  p10 varchar2 default null)
RETURN varchar2 DETERMINISTIC IS
BEGIN
  RETURN rawtohex(DBMS_CRYPTO.hash(UTL_RAW.CAST_TO_RAW(
    p1||'|'||p2||'|'||p3||'|'||p4||'|'||
    p5||'|'||p6||'|'||p7||'|'||p8||'|'||
    p9||'|'||p10), DBMS_CRYPTO.HASH_MD5));
END;
/
Now create a new hidden and protected item on the APEX page; I’ll call it P3_CHECKSUM.
Then create a PL/SQL anonymous block process:
Name: Calculate checksum
Sequence: 20 (just after Automated Row Fetch)
Process Point: On Load - After Header
Process:
BEGIN
  :p3_checksum := form_md5_checksum(
    :p3_ID, :p3_FIRST_NAME, :p3_LAST_NAME, :p3_PERSONAL_CODE,
    :p3_BIRTHDAY, :p3_SALARY, :p3_IS_ACTIVE);
END;
Then I removed “Database Action” from the form buttons and changed the Button Names (the page submit REQUEST values):
Delete - DELETE
Apply Changes - UPDATE
Create - INSERT
The default “Automatic Row Processing (DML)” process cannot be used for saving the changes back to the database, because the ODBC database/driver lacks support for SELECT FOR UPDATE. Because of that, delete the existing “Automatic Row Processing (DML)” process.
To save the changes, a new procedure is needed:
CREATE OR REPLACE PROCEDURE modify_employees_mysql(
  p_action IN VARCHAR2,
  p_row    IN employees_mysql%rowtype,
  p_md5    varchar2)
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  p_new_md5 varchar2(50);
  p_new_row employees_mysql%rowtype;
BEGIN
  -- Calculate checksum
  IF p_action IN ('UPDATE','DELETE') AND p_row.id IS NOT NULL THEN
    -- Lock the row
    UPDATE employees_mysql SET first_name = first_name WHERE id = p_row.id;
    -- Calculate new checksum
    SELECT * INTO p_new_row FROM employees_mysql WHERE id = p_row.id;
    p_new_md5 := form_md5_checksum(p_new_row.ID, p_new_row.FIRST_NAME,
      p_new_row.LAST_NAME, p_new_row.PERSONAL_CODE, p_new_row.BIRTHDAY,
      p_new_row.SALARY, p_new_row.IS_ACTIVE);
    -- Check if the checksum has changed
    IF NVL(p_new_md5, '-') <> NVL(p_md5, '-') THEN
      ROLLBACK;
      raise_application_error(-20000, 'Data has changed');
    END IF;
  END IF;
  -- Do the data modifications
  IF p_action = 'INSERT' THEN
    INSERT INTO employees_mysql VALUES p_row;
  ELSIF p_action = 'UPDATE' AND p_row.id IS NOT NULL THEN
    UPDATE employees_mysql SET ROW = p_row WHERE id = p_row.id;
  ELSIF p_action = 'DELETE' AND p_row.id IS NOT NULL THEN
    DELETE FROM employees_mysql WHERE id = p_row.id;
  ELSE
    raise_application_error(-20099, 'Invalid action.');
  END IF;
  COMMIT;
END;
/
Note the “PRAGMA AUTONOMOUS_TRANSACTION” in the code above. I used the default open-source MySQL ODBC driver, which lacks support for 2PC (two-phase commit). The symptom of this is “ORA-02047: cannot join the distributed transaction in progress” when running the procedure inside an APEX transaction.
If you are using a commercial ODBC driver with 2PC support, or drivers supplied by Oracle HS or Oracle Transparent Gateways, then you don’t need an autonomous transaction for this procedure, and you also need to remove the commit/rollback statements from it.
And finally, put this procedure into the APEX page flow.
Create a new PL/SQL anonymous block process:
Name: Save changes
Sequence: 30
Process Point: On Submit - After Computations and Validations
Process:
DECLARE
  p_row employees_mysql%rowtype;
BEGIN
  p_row.id            := :P3_ID;
  p_row.first_name    := :P3_FIRST_NAME;
  p_row.last_name     := :P3_LAST_NAME;
  p_row.personal_code := :P3_PERSONAL_CODE;
  p_row.birthday      := :P3_BIRTHDAY;
  p_row.salary        := :P3_SALARY;
  p_row.is_active     := :P3_IS_ACTIVE;
  modify_employees_mysql(:REQUEST, p_row, :p3_checksum);
END;
And now you have it – APEX running on another database 🙂