On many sites where I’ve been involved in troubleshooting and performance tuning, I find the wrong technology being used for the wrong use case.

Usually the client’s technology experts are keen on some new feature of a technology and want to solve all kinds of problems by using it.

In practice, new features are added to cover only specific use cases; otherwise they would have been part of the product from the very beginning.

 

In this case I’ll explain one very common use case: an application (here a Java-based app) running in production with debug mode enabled.

The reason for that is always justified by easier troubleshooting of technical and especially functional issues.
To achieve it, in the case of Oracle, LOBs (Oracle Large Objects) have been used.

 

Oracle is well known as the most advanced, feature-rich database system, which includes object, OLAP, Graph, Spatial, Machine Learning, XML, JSON and many other variants of database systems, but relational database management is the role Oracle was created for.

This is exactly what many developers and newbies often forget.

You can use it as an XML database or turn Oracle into a REST API data store and do many crazy things, but RDBMS is the primary role of Oracle in terms of scalability, HA, security, performance etc.

 

OK, so how do developers solve persistence for debugging purposes?

Naturally, by using LOBs: storing troubleshooting information in XMLTYPE, CLOB or, in newer versions of the Oracle RDBMS, even JSON columns.

It is beyond the scope of this article to explain how LOBs work internally, but here are a few areas you need to take a look at (a short DDL sketch follows the list):

  • undo / redo
  • storage in / out of the row
  • caching mechanism
  • indexing
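
To make those bullets concrete, here is a minimal, hedged DDL sketch (the table and column names are hypothetical, not the actual application schema) showing where each of those decisions is made:

-- Hypothetical log table with a CLOB for exception data
CREATE TABLE xxx_log_demo (
    id              NUMBER PRIMARY KEY,
    log_level       VARCHAR2(10),
    message         VARCHAR2(4000),
    exception_data  CLOB
)
LOB (exception_data) STORE AS SECUREFILE (
    ENABLE STORAGE IN ROW   -- small LOBs (roughly up to 4 KB) stay inline; larger ones move out of the row
    CACHE                   -- LOB blocks go through the buffer cache (the default is NOCACHE, i.e. direct reads/writes)
    COMPRESS MEDIUM         -- SecureFiles compression, another knob trading CPU for I/O
);

The point of the sketch is only to show that every bullet above is an explicit storage decision; the defaults are rarely the right answer for a high-volume log table.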

 

What is important to understand, especially in cases where you have a lot of DML operations, is that storing such extensive troubleshooting information in a database is not the best idea.

Below you can find the DML statement (similar statements can be found everywhere) which implements the logic.

 

merge into xxx_log PARTITION ( p_20200511 ) l USING dual ON ( log_id = :1 )
WHEN NOT MATCHED THEN
INSERT (
    id,
    log_level,
    message,
    username,
    component,
    start_date,
    end_date,
    log_id,
    log_date )
VALUES
    ( log_seq.NEXTVAL,
    :2,
    :3,
    :4,
    :5,
    :6,
    :7,
    :8,
    :9,
...
    sysdate )
WHEN MATCHED THEN UPDATE
SET start_date = nvl(:16, start_date),
    end_date = nvl(:17, end_date),
...
    exception_data = nvl(:23, exception_data),
    log_level = decode(log_level, 'ERROR', log_level, :24); 

On the following slide (a picture taken for one particular SQL) you can see that most of the database time is spent on I/O (disk), and that to process just one row the SQL consumes about 20 KB.

Database time spent for query
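
If you want to cross-check a similar per-execution figure on your own system, one rough way (a sketch, not a tuning method; &sql_id is a placeholder you substitute yourself) is to query v$sql:

-- Approximate I/O per execution for one cursor
SELECT sql_id,
       executions,
       ROUND((physical_read_bytes + physical_write_bytes)
             / NULLIF(executions, 0) / 1024) AS io_kb_per_exec
FROM   v$sql
WHERE  sql_id = '&sql_id';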

If you have only a couple of executions, that would be fine.

But what if you have tens of thousands of executions, as you can see on the following slide?

Executions

If I translate the number of executions into the number of bytes, you can see that peak consumption is about 6.5 GB per second, which will probably bring your storage to its knees, and your database down with it.

I/O bytes
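
To see how close your own system is getting to such throughput at any moment, a quick sanity check (again just a sketch) is the one-minute system metrics:

-- Current read/write throughput from the 60-second metric group
SELECT metric_name,
       ROUND(value / 1024 / 1024) AS mb_per_sec
FROM   v$sysmetric
WHERE  metric_name IN ('Physical Read Total Bytes Per Sec',
                       'Physical Write Total Bytes Per Sec')
AND    group_id = 2;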

Finally, you can take a look at the execution plan, where most of the activity happens in the MERGE itself, where the UPDATE of the CLOB columns is actually performed.

Execution plan for a MERGE statement
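
If you want to pull the same kind of plan, with actual row counts and buffer/read statistics per step, for your own statement, DBMS_XPLAN is the usual tool (&sql_id is again a placeholder for the cursor you are interested in):

-- Needs STATISTICS_LEVEL = ALL or the gather_plan_statistics hint
-- to show actual (A-Rows, Buffers, Reads) statistics
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));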

In one of the future posts I’ll explain a much better way to troubleshoot functional issues.


