This time I’ll explain a challenge I had optimizing RPM worksheet tables.

Even when every layer of your enterprise architecture is tuned, there is still something you can do to speed up query and DML operations.

Here I’ll show what impact you can expect from the physical design alone.

All examples use the RPM_WORKSHEET_ZONE_DATA table from the Oracle Retail Price Management (RPM) module.

From that table I’ll create a custom XX_RPM_WRKSHT_ZONE_DATA table that has the same number of records as RPM_WORKSHEET_ZONE_DATA, but a different number of columns.

While RPM_WORKSHEET_ZONE_DATA has 238 columns, our custom table XX_RPM_WRKSHT_ZONE_DATA will have only 8 columns.


create table XX_RPM_WRKSHT_ZONE_DATA
(	WORKSHEET_ITEM_DATA_ID,
	STATE,  
	ACTION_FLAG, 
	WORKSHEET_STATUS_ID,
	ZONE_ID, 
	DEPT, 
	CLASS, 
	SUBCLASS
   ) 
as
select 
WORKSHEET_ITEM_DATA_ID, 
STATE, 
ACTION_FLAG, 
WORKSHEET_STATUS_ID, 
ZONE_ID, 
DEPT, 
CLASS, 
SUBCLASS
from RPM_WORKSHEET_ZONE_DATA;   
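
Before running the tests, it is worth confirming how different the two tables’ footprints actually are. The following is a sketch, assuming both tables live in your own schema (otherwise query DBA_SEGMENTS with an OWNER filter):

```sql
-- Sanity check: same row count, very different physical footprint.
select segment_name,
       blocks,
       round(bytes / 1024 / 1024) as size_mb
  from user_segments
 where segment_name in ('RPM_WORKSHEET_ZONE_DATA', 'XX_RPM_WRKSHT_ZONE_DATA');
```

The wide table should occupy roughly eight times as many blocks, which is exactly the ratio that will reappear in the consistent-gets statistics below.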

1.
In the first test, I’ll show the difference between updating the wide (238-column) vanilla RPM_WORKSHEET_ZONE_DATA table and running the same update against the custom, narrow (8-column) XX_RPM_WRKSHT_ZONE_DATA table, which holds the same number of records as the vanilla RPM table.


update RPM_WORKSHEET_ZONE_DATA set action_flag = 4 , state =1 where dept =999;

610792 rows updated.
Elapsed: 00:01:16.73

Executing the same DML on custom table:


update xx_rpm_wrksht_zone_data set action_flag = 4 , state =1 where dept =999;

610792 rows updated.
Elapsed: 00:00:08.59

The results show that we updated exactly the same number of records (610,792 in both cases).

But look at the time needed for the UPDATE to complete: in the first case Oracle needs 1 minute and 16 seconds, while against our custom, narrow (8-column) table the same task completes in a little less than 9 seconds.

That is almost a 9x difference. If I repeat the same test, the difference jumps to almost 11x.
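To see where the time goes, you can snapshot session-level statistics around each UPDATE. This is only a sketch; it must run in the same session as the UPDATE and requires SELECT privilege on the V$ views:

```sql
-- Run this query before and after each UPDATE and diff the values.
-- The wide table forces far more logical and physical block reads.
select n.name, s.value
  from v$mystat   s
  join v$statname n on n.statistic# = s.statistic#
 where n.name in ('session logical reads', 'physical reads', 'redo size');
```

The dominant difference shows up in the block-read statistics: the same 610,792 rows are spread over roughly eight times as many blocks in the wide table, so Oracle has to visit that many more blocks to change them.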

2.
The next test compares the same two tables from the previous example, but this time I’ll execute an ordinary SELECT statement.

The first run is against the vanilla table.


select /*+ full(a) */ * from RPM_WORKSHEET_ZONE_DATA a where dept = 999;

610792 rows selected.

Elapsed: 00:01:56.00

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
     539350  consistent gets
     539333  physical reads
          0  redo size
  142799428  bytes sent via SQL*Net to client
     448429  bytes received via SQL*Net from client
      40721  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     610792  rows processed

I’m repeating the same test against the custom table with only 8 columns.


select /*+ full(a) */ * from xx_rpm_wrksht_zone_data a where dept = 999;

610792 rows selected.

Elapsed: 00:00:08.12

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      67490  consistent gets
          0  physical reads
          0  redo size
   15851696  bytes sent via SQL*Net to client
     448429  bytes received via SQL*Net from client
      40721  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     610792  rows processed

Again, we are selecting the same number of records in both cases (just as in the previous UPDATE example).

The time elapsed in the first case is almost two minutes (1 minute 56 seconds).

In the second test, against my custom table, the elapsed time is only about 8 seconds (8.12 seconds to be precise).

That is 14.3x faster than the vanilla case.

Although it may look artificial, this case comes from a real-life performance tuning task that was assigned to me.

Now that I’ve provided evidence of the impact you can expect in similar cases, and of how important physical design is, let’s take a look at the cause of such a difference.

Part of the answer is already visible in the Oracle statistics above, where two points stand out.

The number of consistent gets is almost 8x smaller in the second case (67,490 versus 539,350), which means Oracle reads significantly fewer blocks.

The number of bytes sent to the client is also 9x smaller in the second case (15,851,696 versus 142,799,428 bytes).
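
Both effects come down to row packing: the narrower the row, the more rows fit into a single data block, and the fewer blocks a full scan or mass update has to touch. A minimal sketch to quantify this, assuming the tables are in your own schema and you have rights to gather statistics (the `exec` shorthand is SQL*Plus syntax):

```sql
-- Gather fresh optimizer statistics for both tables.
exec dbms_stats.gather_table_stats(user, 'RPM_WORKSHEET_ZONE_DATA')
exec dbms_stats.gather_table_stats(user, 'XX_RPM_WRKSHT_ZONE_DATA')

-- Compare average row length and the implied rows-per-block packing.
select table_name, num_rows, blocks, avg_row_len,
       round(num_rows / blocks) as rows_per_block
  from user_tables
 where table_name in ('RPM_WORKSHEET_ZONE_DATA', 'XX_RPM_WRKSHT_ZONE_DATA');
```

The rows-per-block ratio between the two tables should land close to the ~8x ratio seen in the consistent-gets figures above.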

This article could trigger discussion in many directions, from RPM (Retail Price Management) design questions to how to find the optimal column order.

That can be covered sometime in the future.

