Friday, March 3, 2017

CONNECT BY is dead, long live CTE! In MariaDB 10.2!

Yes, you got that right: the old CONNECT BY, as used for recursive SQL with Oracle, has been replaced by Common Table Expressions, or the WITH statement as defined in SQL:1999, which is now also available in MariaDB 10.2. The SQL WITH construct, using Common Table Expressions or CTEs, is useful for other things than just recursive queries, but recursion is the one feature of WITH that was previously very hard to achieve without some procedural code; the non-recursive uses of Common Table Expressions could previously mostly be replaced by temporary tables.
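
To illustrate that non-recursive case before we dive into recursion, here is a small sketch (using a hypothetical emp table with employee number, name, salary and department columns) where the CTE plays the role that a temporary table of per-department averages would otherwise have played:
WITH dept_avg AS (
  SELECT deptno, AVG(sal) AS avg_sal
    FROM emp
    GROUP BY deptno)
SELECT e.ename, e.sal, d.avg_sal
  FROM emp e JOIN dept_avg d ON e.deptno = d.deptno
  WHERE e.sal > d.avg_sal;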

This blog post will explain what recursive SQL is all about and why it is useful, and I will show some examples of both CONNECT BY and how the same SQL is written using the WITH clause.

The most common example of recursive SQL is probably a parts explosion, where we have a table of the parts of some component and each part is either a main, top-level, part or a part of another part. For example a car with an engine, where the engine consists of pistons, cylinders and a camshaft, and the latter also includes some camshaft bearings. I think you get the basic idea here. To query this data to create a list of the components that make up some other component, you need to visit the data recursively, i.e. each row is evaluated using conditions from any other row already fetched, except the first row fetched, that is.

Now, let's look at some data first. I assume we have two tables here: one table that contains information on the individual parts and one table that contains the hierarchy of the parts, called components. Like this:
CREATE TABLE parts(part_id INTEGER NOT NULL PRIMARY KEY,
  part_name VARCHAR(60) NOT NULL);

CREATE TABLE components(comp_id INTEGER NOT NULL PRIMARY KEY,
  comp_name VARCHAR(60),
  comp_count INTEGER NOT NULL,
  comp_part INTEGER NOT NULL,
  comp_partof INTEGER,
  FOREIGN KEY(comp_part) REFERENCES parts(part_id));
ALTER TABLE components ADD FOREIGN KEY(comp_partof) REFERENCES components(comp_id);


The two things to note here are that the components table has a column, comp_partof, that implements the hierarchy, and that there is a self-referencing FOREIGN KEY constraint on this table. Given these tables, and assuming that we are a small privately held car-manufacturing company in southern Germany, let's insert some data:
INSERT INTO parts VALUES(1, 'Car');
INSERT INTO parts VALUES(2, 'Bolt');
INSERT INTO parts VALUES(3, 'Nut');
INSERT INTO parts VALUES(4, 'V8 engine');
INSERT INTO parts VALUES(5, '6-cylinder engine');
INSERT INTO parts VALUES(6, '4-cylinder engine');
INSERT INTO parts VALUES(7, 'Cylinder block');
INSERT INTO parts VALUES(8, 'Cylinder');
INSERT INTO parts VALUES(9, 'Piston');
INSERT INTO parts VALUES(10, 'Camshaft');
INSERT INTO parts VALUES(11, 'Camshaft bearings');
INSERT INTO parts VALUES(12, 'Body');
INSERT INTO parts VALUES(13, 'Gearbox');
INSERT INTO parts VALUES(14, 'Chassie');
INSERT INTO parts VALUES(15, 'Rear axle');
INSERT INTO parts VALUES(16, 'Rear break');
INSERT INTO parts VALUES(17, 'Wheel');
INSERT INTO parts VALUES(18, 'Wheel bolts');

INSERT INTO components VALUES(1, '320', 1, 1, NULL);
INSERT INTO components VALUES(2, NULL, 1, 6, 1);
INSERT INTO components VALUES(3, NULL, 1, 7, 2);
INSERT INTO components VALUES(4, NULL, 4, 8, 3);
INSERT INTO components VALUES(5, NULL, 4, 9, 3);
INSERT INTO components VALUES(6, NULL, 1, 10, 3);
INSERT INTO components VALUES(7, NULL, 3, 11, 6);
INSERT INTO components VALUES(8, NULL, 1, 12, 1);
INSERT INTO components VALUES(9, NULL, 1, 14, 1);
INSERT INTO components VALUES(10, NULL, 1, 15, 9);
INSERT INTO components VALUES(11, NULL, 2, 16, 10);


INSERT INTO components VALUES(12, '323 i', 1, 1, NULL);
INSERT INTO components VALUES(13, NULL, 1, 5, 12);

If you are not into mechanics, let me tell you that there are more parts than this to a car; for example I left out a few critical components, such as the cupholder, the dog that stands on the pickup cargo area and the insulting bumper sticker, but I think you get the idea. Note that there are two "main" components, the '320' and the '323 i', and that these top-level components are indicated by the comp_partof column being set to NULL.
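
As a quick sanity check, listing just the top-level components needs no recursion at all; a plain query on the comp_partof column does it (this should return the '320' and the '323 i'):
SELECT comp_name FROM components WHERE comp_partof IS NULL;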

Now, assume you want to list all the parts that make up a 320. The way this works with the CONNECT BY syntax is that you compose one single SQL statement and provide a CONNECT BY clause to indicate the relationship. Like this:
SELECT LPAD('-', level, '-')||'>' level_text, comp_count, NVL(comp_name, part_name) name
FROM components c, parts p
WHERE c.comp_part = p.part_id
START WITH c.comp_name = '320'
CONNECT BY PRIOR c.comp_id = c.comp_partof;


Let me explain this a bit, but there is nothing really magic here. We are selecting from the two tables and joining them just as usual. Then we use the START WITH clause to define the top-level component, and the rest of the components are those that have a comp_partof matching the comp_id of the START WITH component, or the comp_id of any other component that has already been fetched.
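
Oracle also ships a few helpers around CONNECT BY. As a hedged, Oracle-only sketch, the SYS_CONNECT_BY_PATH function renders the full path from the top-level component down to each part, using the same START WITH / CONNECT BY clauses as above:
SELECT SYS_CONNECT_BY_PATH(NVL(comp_name, part_name), '/') component_path
FROM components c, parts p
WHERE c.comp_part = p.part_id
START WITH c.comp_name = '320'
CONNECT BY PRIOR c.comp_id = c.comp_partof;
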
This way of writing recursive SQL has some advantages: it is relatively compact and easy to understand. The disadvantage is that there are some quirks and limitations, and once your queries get more complex, CONNECT BY gets a bit hairy. One sure sign that CONNECT BY is going away, even though I and many others tend to like it because of its ease of use, is that even Oracle, as of Oracle 11g, has also implemented the WITH construct, i.e. Common Table Expressions or CTEs. So, looking at the statement above, this is what it would look like in MariaDB 10.2 using the WITH construct:
WITH RECURSIVE comp(comp_id, comp_name, comp_partof, comp_count) AS (
  SELECT comp_id, comp_name, comp_partof, comp_count
    FROM components JOIN parts ON comp_part = part_id
    WHERE comp_partof IS NULL AND comp_name = '320'
  UNION ALL
  SELECT c1.comp_id, p.part_name, c1.comp_partof, c1.comp_count
  FROM components c1 JOIN parts p ON c1.comp_part = p.part_id
    JOIN comp c2 ON c1.comp_partof = c2.comp_id)
SELECT comp_count, comp_name FROM comp;


Comparing this CTE version to the CONNECT BY version above, it is a bit more complex, but how it works is actually pretty clear once you look at it carefully. To begin with, the top-level item, or anchor, is the first SELECT in the UNION ALL, and the following components are fetched using the second SELECT. The recursive aspect is handled by this UNION being run until it returns no more rows. As you can see, although this requires more text and more complex SQL to write, it is also a fair bit more flexible. For example, the anchor point is defined by a completely separate SELECT, which means it can be whatever SELECT you want, selecting from any odd table. Secondly, the columns and conditions you use to define the hierarchy can be as complex as you want. And thirdly, there is also the power of that last SELECT, which in the case above just gets the data from the UNION, but which can apply any kind of filtering, ordering or column selection. The result of the query above is this:
comp_count      comp_name
1               320
1               4-cylinder engine
1               Body
1               Chassie
1               Cylinder block
1               Rear axle
4               Cylinder
4               Piston
1               Camshaft
2               Rear break
3               Camshaft bearings
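
To show off that flexibility of the final SELECT, here is a variation (a sketch, but it should run as-is on MariaDB 10.2) where only the last SELECT changes, filtering and ordering the recursive result:
WITH RECURSIVE comp(comp_id, comp_name, comp_partof, comp_count) AS (
  SELECT comp_id, comp_name, comp_partof, comp_count
    FROM components JOIN parts ON comp_part = part_id
    WHERE comp_partof IS NULL AND comp_name = '320'
  UNION ALL
  SELECT c1.comp_id, p.part_name, c1.comp_partof, c1.comp_count
  FROM components c1 JOIN parts p ON c1.comp_part = p.part_id
    JOIN comp c2 ON c1.comp_partof = c2.comp_id)
SELECT comp_count, comp_name FROM comp
WHERE comp_count > 1
ORDER BY comp_count DESC, comp_name;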

Before I finish this off: the WITH RECURSIVE statement above is somewhat more verbose than it needs to be. In MariaDB 10.2 you can, for example, skip listing the column names of the recursive table, like this:
WITH RECURSIVE comp AS (
  SELECT comp_id, comp_name, comp_partof, comp_count
    FROM components JOIN parts ON comp_part = part_id
    WHERE comp_partof IS NULL AND comp_name = '320'
  UNION ALL
  SELECT c1.comp_id, p.part_name, c1.comp_partof, c1.comp_count
  FROM components c1 JOIN parts p ON c1.comp_part = p.part_id
    JOIN comp c2 ON c1.comp_partof = c2.comp_id)
SELECT comp_count, comp_name FROM comp;


And although Oracle 11g and up supports CTEs, they work a bit differently there. For one thing, the RECURSIVE keyword isn't supported (the query is assumed to be recursive by default), and the way I read the SQL standard, this is actually wrong: for recursive queries you have to use the RECURSIVE keyword. Second, Oracle does require the column list. So in Oracle, you would see something like this:
WITH comp(comp_id, comp_name, comp_partof, comp_count) AS (
  SELECT comp_id, comp_name, comp_partof, comp_count
    FROM components JOIN parts ON comp_part = part_id
    WHERE comp_partof IS NULL AND comp_name = '320'
  UNION ALL
  SELECT c1.comp_id, p.part_name, c1.comp_partof, c1.comp_count
  FROM components c1 JOIN parts p ON c1.comp_part = p.part_id
    JOIN comp c2 ON c1.comp_partof = c2.comp_id)
SELECT comp_count, comp_name FROM comp;

Yes, we are all happily following the same SQL standard. Somewhat...
See the MariaDB Knowledge Base for more information on common table expressions.

Happy SQL'ing
/Karlsson

Tuesday, February 28, 2017

JSON with MariaDB 10.2

JSON is fast becoming the standard format for data interchange and for unstructured data, and MariaDB 10.2 adds a range of JSON functions, even though a JSON datatype isn't implemented yet. There are some reasons why there isn't a JSON datatype, but one is that there are actually not that many advantages to one, as JSON is a text-based format. This blog post aims to describe JSON and the use cases for it, to describe the MariaDB 10.2 JSON functions and their uses, and to show some other additions to MariaDB 10.2 that are useful for JSON processing.

So to begin with then, why do we need JSON? Or to put it differently, why do we not store all data in JSON? Well, the reason as I see it is that some data we work with really is best treated as schemaless, whereas some other data really should be handled in a stricter way with a schema. Which means that, in my mind, mixing relational data with unstructured data is what we really want. And using JSON for unstructured data is rather neat, and JSON is even standardized (see json.org).

There are reasons why this hasn't always been so. When the sad old git that is writing this stuff started working in this industry, which I think was during the Harding administration, computers were rare, expensive, handled only by experts (so how I got to work with them is a mystery) and built from Lego bricks, Meccano and pieces of solid gold (to keep up the price). Also, they were about as powerful as a slide rule, except they were fed with punched cards (and probably powered by steam). Anyway, no one in their right mind would have considered storing pictures of cute felines as something to be on a computer, or actually stuff to be stored in the database. The little that would fit was the really important stuff - like price, amount in stock, customer name, billing address and such - and nothing else. And not only that, the stuff that was stored had some kind of value, somehow, which meant it had to follow certain rules (and following rules is something I am good at? I wonder how I ended up in this business. Again). Like, a price had to be a number of some kind, with a value 0 or higher and some other restrictions. As you see, these were hard and relentless times.

And then time moved on and people started buying things on the internet (whatever the internet is. I think it is some kind of glorified, stylish version of punched cards) and stuff such as Facebook and Google came around. The issue with computer storage was now not how to fit all that nicely structured data in it, but rather: once we have filled that hard drive on your desktop with all the products, customers and transactions from Amazon (I think Amazon has something to do with the Internet, but I am not sure) and a full 17.85% of that drive is now occupied by that old-style structured data, what more do we put in there? Maybe we could put some more information on the items for sale in that database, and some general information on who is buying them? That should fill up that disk nicely, right? Well, yes, but that new data, although related to the structured data I already have, is largely unstructured. Say, for example, that you write a review of a product on Amazon late in the morning after a good deal of heavy "partying" (which is not an Internet thing, I think); the contents of that would hardly be considered "structured". If you didn't like the product (which you probably didn't), then the appropriate terms for large parts of that review would probably be "profanity" or "foul language".

The way to deal with the above is a mix of structured and unstructured data, with some kind of relation between the two. Like a column of unstructured data in each relational database table (or should I say "relation", just to show my age? Or maybe I should pretend to be really young, modern and cool, possibly sporting a hipster beard and all that, by calling it a "collection").

With that out of the way, let's consider an example using structured as well as unstructured JSON data. Assume we have a store selling different types of clothing - pants, jackets, shoes - and we are to create a table to hold the inventory. This table would have some columns that are always there and which have the same meaning for all rows in the table, like name, amount in stock and price. These are items that are well suited for a relational format. On top of this we want to add attributes that have a different meaning for each type of, or even each instance of, item. Here we have things like colour, width, length and size. These we consider non-relational, as the interpretation of these attributes is different depending on the type of garment (like size M, or shoe sizes, or a "zebra striped" colour), and some garments might have some unique attribute, like designer or recommended-by-staff or something. Our table might then look something like this:
MariaDB> CREATE TABLE products(id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(255) NOT NULL,
  price DECIMAL(9,2) NOT NULL,
  stock INTEGER NOT NULL,
  attr VARCHAR(1024));

In this table we have a few columns that look like columns in any relational database table, and then we have a column, called attr, that can hold any relevant attribute for the garment in question, and we will store that as a JSON string. You probably noticed that we aren't using a JSON datatype here, as there is none in MariaDB; instead, the JSON functions act on a text string with JSON content. These functions are introduced in MariaDB 10.2 (which is in Beta as I write this), but there are a few bugs that mean you should use MariaDB 10.2.4 or higher, so from here on I assume that MariaDB 10.2.4 or higher is being used.

But there is one issue with the above that I don't particularly care for: as the attr column is plain text, any kind of data can be put in it, even non-valid JSON. The good thing is that there is a fix for this in MariaDB 10.2, namely CHECK constraints that actually work, a little-discussed feature of MariaDB 10.2. The way this works is that whenever a row is INSERTed or UPDATEd, any CHECK constraint runs and validates the data, and if the validation fails the operation also fails. Before I show an example I just want to mention one JSON function we are to use here, JSON_VALID, which takes a string and checks if it is valid JSON. Note that although CHECK constraints are particularly useful here, they can be used for any kind of data validation.
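
As a quick illustration, and assuming MariaDB 10.2.4 or higher as discussed above, JSON_VALID simply returns 1 or 0 (the second call below returns 0 as the closing brace is missing), and a CHECK constraint can just as well guard an ordinary column; the prices_demo table here is just a made-up example:
MariaDB> SELECT JSON_VALID('{"size": 42}'), JSON_VALID('{"size": 42');
MariaDB> CREATE TABLE prices_demo(price DECIMAL(9,2) NOT NULL CHECK (price >= 0));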

Armed with this, let's rewrite the statement that creates the table like this:
MariaDB> CREATE TABLE products(id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(255) NOT NULL,
  price DECIMAL(9,2) NOT NULL,
  stock INTEGER NOT NULL,
  attr VARCHAR(1024),
  CHECK (JSON_VALID(attr)));


Let's give this a try now:
MariaDB> INSERT INTO products VALUES(NULL, 'Jeans', 10.5, 165, NULL);
ERROR 4025 (23000): CONSTRAINT `CONSTRAINT_1` failed for `inventory`.`products`

Ok, that didn't work out. What happens here is that a NULL string isn't a valid JSON value, so we need to rewrite our table definition:
MariaDB> CREATE TABLE products(id INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(255) NOT NULL,
  price DECIMAL(9,2) NOT NULL,
  stock INTEGER NOT NULL,
  attr VARCHAR(1024),
  CHECK (attr IS NULL OR JSON_VALID(attr)));

Following this we can try it again:
MariaDB> INSERT INTO products VALUES(NULL, 'Jeans', 10.5, 165, NULL);
Query OK, 1 row affected (0.01 sec)
MariaDB> INSERT INTO products VALUES(NULL, 'Shirt', 10.5, 78, '{"size": 42, "colour": "white"}');
Query OK, 1 row affected (0.01 sec)
MariaDB> INSERT INTO products VALUES(NULL, 'Blouse', 17, 15, '{"colour": "white}');
ERROR 4025 (23000): CONSTRAINT `CONSTRAINT_1` failed for `inventory`.`products`

That last statement failed because of malformed JSON (a closing double quote was left out), so let's correct that:
MariaDB> INSERT INTO products VALUES(NULL, 'Blouse', 17, 15, '{"colour": "white"}');
Query OK, 1 row affected (0.01 sec)

One thing that has yet to be discussed is indexes on JSON values. As the attr column in our example is plain text, we can of course index it as usual, but that is probably not what you want to do; rather, what would be neat is to create an index on individual attributes in that JSON string. MariaDB doesn't yet support functional indexes, i.e. indexes not on stored values but on computed values. What MariaDB does have, though, is Virtual Columns, and these can be indexed, and as of MariaDB 10.2 these virtual columns don't have to be persistent (read more on Virtual Columns here: Putting Virtual Columns to good use).

The easiest way to explain this is with an example. Let's say we want an index on the colour attribute, if such a thing exists. For this we need two things: A virtual column that contains the colour attribute as extracted from the attr column, and then an index on that. In this case we will be using the JSON_VALUE function that takes a JSON value and a path, the latter describing the JSON operation to be performed, somewhat like a query language for JSON.

We end up with something like this:
MariaDB> ALTER TABLE products ADD attr_colour VARCHAR(32) AS (JSON_VALUE(attr, '$.colour'));
MariaDB> CREATE INDEX products_attr_colour_ix ON products(attr_colour);

With that in place, let's see how that works:
MariaDB> SELECT * FROM products WHERE attr_colour = 'white';
+----+--------+-------+-------+---------------------------------+-------------+
| id | name   | price | stock | attr                            | attr_colour |
+----+--------+-------+-------+---------------------------------+-------------+
|  2 | Shirt  | 10.50 |    78 | {"size": 42, "colour": "white"} | white       |
|  3 | Blouse | 17.00 |    15 | {"colour": "white"}             | white       |
+----+--------+-------+-------+---------------------------------+-------------+
2 rows in set (0.00 sec)

And let's see if that index is working as it should:
MariaDB> EXPLAIN SELECT * FROM products WHERE attr_colour = 'white';
+------+-------------+----------+------+-------------------------+-------------------------+---------+-------+------+-------------+
| id   | select_type | table    | type | possible_keys           | key                     | key_len | ref   | rows | Extra       |
+------+-------------+----------+------+-------------------------+-------------------------+---------+-------+------+-------------+
|    1 | SIMPLE      | products | ref  | products_attr_colour_ix | products_attr_colour_ix | 99      | const |    2 | Using where |
+------+-------------+----------+------+-------------------------+-------------------------+---------+-------+------+-------------+
1 row in set (0.00 sec)


And just to show that the attr_colour column is a computed column that depends on the attr column, let's try updating the colour of the blouse, making it red instead of white, and then search for that. To replace a value in a JSON object, MariaDB 10.2 provides the JSON_REPLACE function (for all JSON functions in MariaDB 10.2, see the MariaDB Knowledge Base).
MariaDB> UPDATE products SET attr = JSON_REPLACE(attr, '$.colour', 'red') WHERE name = 'Blouse';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0
MariaDB> SELECT attr_colour FROM products WHERE name = 'blouse';
+-------------+
| attr_colour |
+-------------+
| red         |
+-------------+
1 row in set (0.00 sec)


There is more to say about JSON in MariaDB 10.2, but I hope you now have a feel for what's on offer.
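
As a final taster, and hedging a bit as you should check the Knowledge Base for the exact function list, attributes can also be added and removed, not just replaced, using JSON_INSERT and JSON_REMOVE:
MariaDB> UPDATE products SET attr = JSON_INSERT(attr, '$.size', 38) WHERE name = 'Blouse';
MariaDB> UPDATE products SET attr = JSON_REMOVE(attr, '$.size') WHERE name = 'Blouse';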

Happy SQL'ing
/Karlsson

Monday, January 16, 2017

Released MyMSSQLDump 1.1

My program for exporting data from MSSQL and Sybase into a whole bunch of other formats, including:
  • JSON
  • HTML
  • CSV
  • MySQL (mysqldump style)
  • MYSQL / Sybase INSERT-style
  • Oracle INSERT-style
is now released in version 1.1. There is a whole bunch of new things, most notably an Oracle-style export format, but also:
  • DATETIME datatype formatting
  • DATETIMEOFFSET formatting
  • Other temporal datatype support
  • Much more flexible formatting in general
  • More tests
As usual this is GPL v2 licensed and is available for download from sourceforge.

Happy SQL'ing
/Karlsson

Thursday, January 5, 2017

Powerful MariaDB exports using MyMDBDump

You can export data from MariaDB using mysqldump and a bunch of other tools, but if you need a really flexible output format, these might not be what you need. Instead, give MyMDBDump a try. This tool will export data just like mysqldump, but the output format is a lot more flexible. I urge you to test it and check out the documentation, but some of the features are, in short:
  • Dynamic column support - Dynamic columns can be exported as binary, JSON or even as SQL, where I think the latter is a pretty unique feature.
  • Oracle export format - This exports data as INSERT statements, just like mysqldump, but in an Oracle friendly way.
  • MS SQL / SQL Server format - This exports data as SQL Server friendly INSERTs.
  • Binary data formats - Supporting plain binary, base64, hex and a number of more formats.
  • Transaction support.
  • JSON export support - Including embedding dynamic columns as JSON.
  • UTF8 support and UTF8 validity checking.
  • Generated ROWNUM column output
  • FLOAT and DOUBLE formatting
  • DATETIME, TIMESTAMP, DATE and TIME custom formatting.
And a lot more. Download from sourceforge.

Happy SQL'ing
/Karlsson

Tuesday, October 11, 2016

Getting data out of SQL Server in a MariaDB friendly way.

I know what you are thinking: "Oh my, it's that time again, Karlsson insists that data should get out of some other database and into MariaDB. What is he up to this time?" And that is exactly right; this time data is coming out of SQL Server, out of that expensive, closed-source, Windows-only software, and into the lightweight, fast and cost-effective MariaDB.

I have already shown how this works when getting data out of Oracle by releasing MyOraDump, so in line with that, the corresponding program to get data out of SQL Server is called MyMSSQLDump, and it has features along the same lines as MyOraDump. To connect to SQL Server (or Sybase, but this I haven't tested) I use FreeTDS, which is a nice Open Source SQL Server / Sybase driver. The output formats supported are MySQL (i.e. a file with INSERT INTO statements), MSSQL (same as the MySQL format but aligned for MSSQL), JSON, JSON Array (similar to JSON but with all data in one array) and CSV. All formats are very flexible, and the common SQL Server / Sybase datatypes are supported.

As usual, the code is written in C and uses autotools for building. I have only tested building it on CentOS (6 and 7) so far, but there is nothing magic to it, so it should work on other platforms too. Download it from sourceforge.

Happy SQL'ing
/Karlsson

Monday, September 19, 2016

Replication from Oracle to MariaDB - Part 3

This is the third part of a series of blogs on how to do Change Data Capture from Oracle in an effort to be able to replicate that data to MariaDB. Part 1 dealt with some basics on the Oracle side of things regarding the Oracle redo log, which we use as a source for our CDC efforts, and part 2 was about how to get this to work. We left part 2 with an example of how to replicate INSERTs from Oracle to MariaDB, but that is hardly good enough; we also have to deal with UPDATE and DELETE to make it at least somewhat complete, and this is what we will have a look at in this post.

And I know what you are asking yourselves now: you spent all that time explaining the Oracle ROWID and then you never used it. That was an awful waste of time. And no, it wasn't a waste; we will put it to good use now when we look at UPDATEs.

For an INSERT, things are reasonably simple: we just add a row with some columns to the table. For an UPDATE or a DELETE, though, we need a means to identify the row to be updated. To reiterate one thing: the Oracle redo log is not like the MariaDB binlog here, and here I am assuming that the binlog format is STATEMENT. Let's look at an example, and for that let's insert some data into our EMP table (see the previous post for details).
SQL> INSERT INTO emp VALUES(2, 'John', 'Developer', 12000, 20);
SQL> INSERT INTO emp VALUES(3, 'Georgina', 'Design', 11000, 20);
SQL> INSERT INTO emp VALUES(4, 'Anne', 'Assistant', 8000, 30);
SQL> INSERT INTO emp VALUES(5, 'Marge', 'HR Mgr', 14000, 30);
SQL> COMMIT;
With that in place, let's now ensure that this is also replicated to MariaDB by running the script from the last post:
$ sqlplus -S / as sysdba @cdcemp.sql | mysql -u anders -panders anders
And to ensure that we got this right, let's check this out in MariaDB now:
$ mysql -u anders -panders anders
MariaDB [anders]> SELECT * FROM emp;
+-------+----------+-----------+----------+--------+
| empno | ename    | job       | sal      | deptno |
+-------+----------+-----------+----------+--------+
|     1 | Anders   | Sales     | 10000.00 |     10 |
|     2 | John     | Developer | 12000.00 |     20 |
|     3 | Georgina | Design    | 11000.00 |     20 |
|     4 | Anne     | Assistant |  8000.00 |     30 |
|     5 | Marge    | HR Mgr    | 14000.00 |     30 |
+-------+----------+-----------+----------+--------+
5 rows in set (0.00 sec)
OK, seems like we got it right so far. Then let's try an UPDATE and raise the salary for the two developers in Department 20:
$ sqlplus anders/anders
SQL> UPDATE emp SET sal = sal + 100 WHERE deptno = 20;
Now, what does this result in? If this was MariaDB with STATEMENT binlog format, the binlog would have something like this in it:
# at 9535
#160919 13:00:44 server id 112  end_log_pos 9652 CRC32 0x987f9dea       Query   thread_id=101   exec_time=0     error_code=0
SET TIMESTAMP=1474282844/*!*/;
UPDATE emp SET sal = sal + 100 WHERE deptno = 20
/*!*/;
We've seen this before, no surprises there. Now, let's try this from SQL*Plus and see what the SQL_REDO column in V$LOGMNR_CONTENTS looks like.
update "ANDERS"."EMP" set "SAL" = '12100' where "SAL" = '12000' and ROWID = 'AAAE7RAAEAAAAK9AAB';
update "ANDERS"."EMP" set "SAL" = '11100' where "SAL" = '11000' and ROWID = 'AAAE7RAAEAAAAK+AAA';

Whoa, that doesn't look like our UPDATE at all. What's going on? Well, I already told you that the Oracle redo log doesn't work like the binlog. In this case, we get an SQL statement from it, but it is not the same SQL statement that we asked Oracle to execute for us. No, this SQL is reconstructed from the redo log record, which logs updates row by row, and uses the ROWID to identify the row that was changed. We have already discussed that the ROWID is unique in a table, and to an extent even across tables. The issue now is that we don't have a ROWID in MariaDB. But the ROWID in the redo log is available in the ROW_ID column of the V$LOGMNR_CONTENTS table. And if we assume that all tables we replicate have to have a PRIMARY KEY, then we can use the ROWID to look up that PRIMARY KEY and use that for UPDATE operations in MariaDB, right? Something like this in our script:
SELECT empno INTO v_empno FROM anders.emp WHERE rowid = v_row_id;
That works, but there is one issue with it. We get our data from the redo log, which is historic, so to speak, but we look up the PRIMARY KEY using the ROWID based on how the data looks right now, which means that if you are in the habit of updating your PRIMARY KEYs (don't do this, by the way; it is a bad habit even if we exclude the specific use case we look at here), then this isn't going to fly. Sorry, I will not fix that for you: if you update a row and then update its primary key, the first update will fail.

Then let's look at DELETEs. Can we use the same method as for UPDATEs? It's tempting, I know, but it is not going to work at all. Think about it: we just determined that we get the ROWID for a row that is historic, but in the case of a DELETE there is no "current row", as we DELETEd it, right? Whoa, how do we solve that? Are we stuck now, out in the desert, the hot Oracle sun shining on our unprotected pale skin, getting hotter and hotter? Our old Studebaker has broken down by the side of the abandoned dirt road, and you stare into the unforgiving sun and you know this is it?

Nope, this can be fixed, luckily! The Oracle redo log contains redo data; that is what we have been using so far. The thing is that it also contains undo data! Hey! And then we think about this amazing concept a bit and consider what is necessary to undo a delete. Frankly, the values of ALL columns in the deleted row, right? Our friendly DBMS_LOGMNR.MINE_VALUE function comes to the rescue, but this time operating not on the redo value but on the undo value!

Armed with all this, we can now revisit our cdcemp.sql script and add stuff to handle UPDATE and DELETE operations too:
SET serveroutput ON
SET linesize 4000
SET feedback off
DECLARE
   v_scn NUMBER;
   v_scnstart NUMBER;
   v_scnend NUMBER;
   v_redo_value NUMBER;
   v_undo_value NUMBER;
   v_operation VARCHAR2(32);
   v_row_id VARCHAR2(18);
   v_sqlstmt VARCHAR2(4000);
   v_firstcol BOOLEAN;
   v_empno NUMBER;
   CURSOR cur1 IS
     SELECT scn, redo_value, undo_value, operation, row_id
       FROM v$logmnr_contents
       WHERE seg_owner = 'ANDERS' AND table_name = 'EMP'
         AND operation IN('INSERT', 'UPDATE', 'DELETE');
BEGIN
-- Start LogMiner.
   SELECT NVL(MIN(l.scn) + 1, MIN(e.ora_rowscn)) INTO v_scnstart
     FROM anders.emp e, scnlog l
     WHERE l.table_name = 'EMP';
   SELECT MAX(e.ora_rowscn) INTO v_scnend
     FROM anders.emp e;
   DBMS_OUTPUT.PUT_LINE('-- SCN Range: ' || v_scnstart || ' - ' || v_scnend);
   DBMS_LOGMNR.START_LOGMNR(STARTSCN => v_scnstart,
     ENDSCN => v_scnend,
     OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.CONTINUOUS_MINE +
     DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- Dummy update to insert a more recent SCN to ensure that we don't have to wait
-- for the next update in the following select.
   UPDATE scnlog SET table_name = table_name WHERE table_name = 'EMP';
   COMMIT;

-- Open cursor to get data from LogMiner.
   OPEN cur1;

-- Loop for all the rows in the redo log since the last time we ran this.
   LOOP
      FETCH cur1 INTO v_scn, v_redo_value, v_undo_value, v_operation, v_row_id;
      EXIT WHEN cur1%NOTFOUND;
      IF v_operation = 'INSERT' THEN
         v_sqlstmt := 'INSERT INTO emp(EMPNO, ENAME, JOB, SAL, DEPTNO) VALUES(';
         v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.EMPNO') || ', ';
         v_sqlstmt := v_sqlstmt || '''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.ENAME') || ''', ';
         v_sqlstmt := v_sqlstmt || '''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.JOB') || ''', ';
         v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.SAL') || ', ';
         v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.DEPTNO') || ')';
      ELSIF v_operation = 'UPDATE' THEN
         v_sqlstmt := 'UPDATE emp SET ';
         v_firstcol := TRUE;

-- Build UPDATE statement using only column in redo log record.
         IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.EMPNO') = 1 THEN
            v_sqlstmt := v_sqlstmt || 'EMPNO = ' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.EMPNO');
            v_firstcol := FALSE;
         END IF;
         IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.ENAME') = 1 THEN
            v_sqlstmt := v_sqlstmt || CASE WHEN NOT v_firstcol THEN ', ' ELSE '' END;
            v_sqlstmt := v_sqlstmt || 'ENAME = ''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.ENAME')
              || '''';
            v_firstcol := FALSE;
         END IF;
         IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.JOB') = 1 THEN
            v_sqlstmt := v_sqlstmt || CASE WHEN NOT v_firstcol THEN ', ' ELSE '' END;
            v_sqlstmt := v_sqlstmt || 'JOB = ''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.JOB')
              || '''';
            v_firstcol := FALSE;
         END IF;
         IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.SAL') = 1 THEN
            v_sqlstmt := v_sqlstmt || CASE WHEN NOT v_firstcol THEN ', ' ELSE '' END;
            v_sqlstmt := v_sqlstmt || 'SAL = ' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.SAL');
            v_firstcol := FALSE;
         END IF;
         IF DBMS_LOGMNR.COLUMN_PRESENT(v_redo_value, 'ANDERS.EMP.DEPTNO') = 1 THEN
            v_sqlstmt := v_sqlstmt || CASE WHEN NOT v_firstcol THEN ', ' ELSE '' END;
            v_sqlstmt := v_sqlstmt || 'DEPTNO = ' || DBMS_LOGMNR.MINE_VALUE(v_redo_value, 'ANDERS.EMP.DEPTNO');
            v_firstcol := FALSE;
         END IF;

-- Get empno from emp using rowid.
         SELECT MAX(empno) INTO v_empno FROM anders.emp WHERE rowid = v_row_id;
         IF v_empno IS NULL THEN
           CONTINUE;
         END IF;
         v_sqlstmt := v_sqlstmt || ' WHERE empno = ' || v_empno;
      ELSIF v_operation = 'DELETE' THEN
-- Get the empno from the undo record.
         v_empno := DBMS_LOGMNR.MINE_VALUE(v_undo_value, 'ANDERS.EMP.EMPNO');
         IF v_empno IS NULL THEN
           CONTINUE;
         END IF;
         v_sqlstmt := 'DELETE FROM emp WHERE empno = ' || v_empno;
      END IF;

      DBMS_OUTPUT.PUT_LINE('-- SCN = ' || v_scn);
      DBMS_OUTPUT.PUT_LINE(v_sqlstmt || ';');
   END LOOP;
   IF v_scn IS NOT NULL THEN
     UPDATE scnlog SET scn = v_scn WHERE table_name = 'EMP';
     COMMIT;
   END IF;
   CLOSE cur1;
END;
/
EXIT

Let's try this script, and see if the UPDATE to the sal column that I did at the beginning of this post is properly replicated to MariaDB:
$ sqlplus -S / as sysdba @cdcemp.sql | mysql -u anders -panders anders
And then we check what we have in MariaDB:
$ mysql -u anders -panders anders
MariaDB [anders]> SELECT * FROM emp;
+-------+----------+-----------+----------+--------+
| empno | ename    | job       | sal      | deptno |
+-------+----------+-----------+----------+--------+
|     1 | Anders   | Sales     | 10000.00 |     10 |
|     2 | John     | Developer | 12100.00 |     20 |
|     3 | Georgina | Design    | 11100.00 |     20 |
|     4 | Anne     | Assistant |  8000.00 |     30 |
|     5 | Marge    | HR Mgr    | 14000.00 |     30 |
+-------+----------+-----------+----------+--------+
5 rows in set (0.00 sec)
And then see if we can also handle a DELETE. In SQL*Plus we run this:
SQL> DELETE FROM emp WHERE ename = 'John';
SQL> COMMIT;
Then we run our script:
$ sqlplus -S / as sysdba @cdcemp.sql | mysql -u anders -panders anders
And then we check what we have in MariaDB:
$ mysql -u anders -panders anders
MariaDB [anders]> SELECT * FROM emp;
+-------+----------+-----------+----------+--------+
| empno | ename    | job       | sal      | deptno |
+-------+----------+-----------+----------+--------+
|     1 | Anders   | Sales     | 10000.00 |     10 |
|     3 | Georgina | Design    | 11100.00 |     20 |
|     4 | Anne     | Assistant |  8000.00 |     30 |
|     5 | Marge    | HR Mgr    | 14000.00 |     30 |
+-------+----------+-----------+----------+--------+
4 rows in set (0.00 sec)

Bazinga, it worked! Are we done then? I'm afraid not, we still have transactions to take care of, but that has to wait a day or two.

Happy SQL'ing
/Karlsson

Friday, September 16, 2016

Replication from Oracle to MariaDB - Part 2

In part 1 of this series of blogs on how to replicate from Oracle to MariaDB, we looked at how to get data out of Oracle, and in an effort to look more modern than I really am, let's say that what we were attempting was CDC (Change Data Capture). Hey, I feel like I'm 18 again! Modern, cool! Maybe I should start to wear a baseball cap backwards and have my jeans no higher than my knees. Or again, maybe not. But CDC it is.

When I left you in the previous post we had just started to get some data, reluctantly, out of Oracle. But the SQL statement we got from LogMiner was hardly useful. Fact is, we can fix that, but before we do we need to look at a few other functions in LogMiner. The two functions we want to look at are called DBMS_LOGMNR.COLUMN_PRESENT and DBMS_LOGMNR.MINE_VALUE. These two functions are simple enough: they both take two arguments, first a reference to a redo log record, which is the value of the redo_value column in the V$LOGMNR_CONTENTS table, and second the fully qualified name of the column in question. The table name of the redo log record is, surprisingly, in the table_name column. The reason we need two functions for this is that the second of them can return NULL both when the column in question doesn't exist and when the value is NULL.
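
To make this concrete, here is a sketch (assuming LogMiner has already been started, which the script later in this post takes care of) of what raw calls to these two functions look like:
SELECT scn,
       DBMS_LOGMNR.COLUMN_PRESENT(redo_value, 'ANDERS.EMP.SAL') sal_present,
       DBMS_LOGMNR.MINE_VALUE(redo_value, 'ANDERS.EMP.SAL') sal_value
  FROM v$logmnr_contents
  WHERE seg_owner = 'ANDERS' AND table_name = 'EMP';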

I will for the rest of this post assume that there exists an Oracle user called anders with the password anders and that the same thing exists in MariaDB, where there is also a database called anders (you already guess that, right?) where the user anders has full access.

So, with the table_name and the values of the involved columns, we are ready to go. Sure, let's look at an INSERT. First we need a table to test this on, and let's use the classic Oracle EMP sample table, though I have simplified it a bit to make my example easier to grasp without getting into too many details. So this is what it looks like on the Oracle side of things; let's call this script cretabora.sql:
CREATE TABLE anders.emp(
  empno NUMBER NOT NULL PRIMARY KEY,
  ename VARCHAR2(10) NOT NULL,
  job VARCHAR2(9) NOT NULL,
  sal NUMBER(7,2) NOT NULL,
  deptno NUMBER NOT NULL) TABLESPACE users;
What we also need is some way to keep track of processing; in this case we want to keep track of the SCN that was last used, so we know where to start when we get the next chunk of CDC data (Hey, I used it again. Boy, am I young and hot today). And add this to the script above:
CREATE TABLE scnlog(
  table_name VARCHAR2(30) NOT NULL PRIMARY KEY,
  scn NUMBER) TABLESPACE users;
INSERT INTO anders.scnlog
  SELECT 'EMP', current_scn FROM v$database;
I also allocated space for the EMP table and set the SCN to the current database SCN. As I am using the V$DATABASE virtual table, I have to run the above as a privileged user:
$ sqlplus / as sysdba @cretabora.sql

OK, now I have some tables to work with. Before we go on, let's also create the same objects in a MariaDB database. The minimum we need is something like this; let's call this script cretabmaria.sql:
CREATE TABLE emp(
  empno INTEGER NOT NULL PRIMARY KEY,
  ename VARCHAR(10) NOT NULL,
  job VARCHAR(9) NOT NULL,
  sal DECIMAL(7,2) NOT NULL,
  deptno INTEGER NOT NULL);
I will not explain this in detail; you get it already, I think:
$ mysql -u root
MariaDB [(none)]> CREATE USER 'anders'@'%' IDENTIFIED BY 'anders';
MariaDB [(none)]> CREATE DATABASE anders;
MariaDB [(none)]> GRANT ALL ON anders.* TO 'anders'@'%';
MariaDB [(none)]> exit
$ mysql -u anders -panders anders < cretabmaria.sql
The above really shows that MariaDB is easier to deal with than Oracle, right? But that wasn't what we were looking for here, and we're getting closer now. Aren't you excited?

Let's insert a single row of data in the EMP table, just to get started:
$ sqlplus anders/anders
SQL> INSERT INTO emp VALUES(1, 'Anders', 'Sales', 10000, 10);
SQL> COMMIT;

Now we are ready to extract data from the redo log. For this we will run a small script that runs LogMiner, extracts rows from the redo log and converts that to valid MariaDB SQL syntax. Create a script called cdcemp.sql with this content:
SET serveroutput ON
SET linesize 4000
SET feedback off
DECLARE
   v_scn NUMBER;
   v_scnstart NUMBER;
   v_scnend NUMBER;
   v_redo_value NUMBER;
   v_sqlstmt VARCHAR2(4000);
   CURSOR cur1 IS
     SELECT scn, redo_value
       FROM v$logmnr_contents
       WHERE seg_owner = 'ANDERS' AND table_name = 'EMP'
         AND operation = 'INSERT';
BEGIN
-- Start LogMiner.
   SELECT NVL(MIN(l.scn) + 1, MIN(e.ora_rowscn)) INTO v_scnstart
     FROM anders.emp e, scnlog l
     WHERE l.table_name = 'EMP';
   SELECT MAX(e.ora_rowscn) INTO v_scnend
     FROM anders.emp e;
   DBMS_OUTPUT.PUT_LINE('-- SCN Range: ' || v_scnstart || ' - ' || v_scnend);
   DBMS_LOGMNR.START_LOGMNR(STARTSCN => v_scnstart,
     ENDSCN => v_scnend,
     OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY +
       DBMS_LOGMNR.CONTINUOUS_MINE +
       DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- Dummy update to insert a more recent SCN to ensure
-- that we don't have to wait for the next update in the
-- following select.
   UPDATE scnlog SET table_name = table_name WHERE table_name = 'EMP';
   COMMIT;

-- Open cursor to get data from LogMiner.
   OPEN cur1;

-- Loop for all the rows in the redo log since the last time we ran this.
   LOOP
      FETCH cur1 INTO v_scn, v_redo_value;
      EXIT WHEN cur1%NOTFOUND;
      v_sqlstmt := 'INSERT INTO emp(EMPNO, ENAME, JOB, SAL, DEPTNO)'
        || ' VALUES(';
      v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
         'ANDERS.EMP.EMPNO') || ', ';
      v_sqlstmt := v_sqlstmt || '''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
        'ANDERS.EMP.ENAME') || ''', ';
      v_sqlstmt := v_sqlstmt || '''' || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
        'ANDERS.EMP.JOB') || ''', ';
      v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
        'ANDERS.EMP.SAL') || ', ';
      v_sqlstmt := v_sqlstmt || DBMS_LOGMNR.MINE_VALUE(v_redo_value,
        'ANDERS.EMP.DEPTNO') || ')';

      DBMS_OUTPUT.PUT_LINE('-- SCN = ' || v_scn);
      DBMS_OUTPUT.PUT_LINE(v_sqlstmt || ';');
   END LOOP;
   IF v_scn IS NOT NULL THEN
      UPDATE scnlog SET scn = v_scn WHERE table_name = 'EMP';
      COMMIT;
   END IF;
   CLOSE cur1;
END;
/
EXIT

The first three lines in this script are SQL*Plus-specific settings. Then we declare some variables and a cursor for the V$LOGMNR_CONTENTS table. Following that, we get the starting and ending SCN that we want, use these to start LogMiner and output some information on what we are doing. Then comes a wacko UPDATE. This seems to be necessary to ensure that the redo log is progressed past the last log record so that we can get the data; without it, the SELECT from the cursor would wait until someone else generated some redo log content. This is a kludge, I know, but it seems to work. Also, Oracle isn't smart enough to skip these kinds of dummy updates.
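
For reference, the output that the script prints, and which we pipe into the mysql command-line client below, looks something like this (the SCN numbers here are made up):
-- SCN Range: 4711 - 4719
-- SCN = 4718
INSERT INTO emp(EMPNO, ENAME, JOB, SAL, DEPTNO) VALUES(1, 'Anders', 'Sales', 10000, 10);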

Following this, the cursor is opened and we fetch the rows from it, building a suitable SQL statement for MariaDB for each one. Finally, when the loop is over, I update the SCN log table so I know where to start the next run. This script can thus be run repeatedly to generate incremental updates from Oracle to be inserted into MariaDB. And with all that, let's now finally run the script, piping the extracted data straight into MariaDB:
$ sqlplus -S / as sysdba @cdcemp.sql | mysql -u anders -panders anders

Before we finish, let's check if this worked:
$ mysql -u anders -panders anders
MariaDB [anders]> SELECT * FROM emp;
+-------+--------+-------+----------+--------+
| empno | ename  | job   | sal      | deptno |
+-------+--------+-------+----------+--------+
|     1 | Anders | Sales | 10000.00 |     10 |
+-------+--------+-------+----------+--------+
1 row in set (0.00 sec)

That's it for this time. There are a few more things to look into though, like managing transactions on the MariaDB side. And UPDATEs and DELETEs; those require some special care. But we are getting there now, right? So stay tuned for the third post in this series.

Happy SQL'ing
/Karlsson