I often hear that REDO is the most important thing to tune if you want a fast database, but I rarely hear anyone mention UNDO as another piece of the puzzle that is just as important to keep in good shape.

In this article I’ll avoid repeating what you can find in many Oracle books and concentrate only on what I’ve experienced in the field that might be useful to others.

Oracle Retail batches are a very good example of where you need to tune UNDO if you want to maintain good performance.

I assume that you already know how redo and undo work together to provide the ACID properties.

I also assume you understand that a DELETE statement generates the most UNDO blocks.

Here I’ll cover three typical examples that can help you understand the performance impact of UNDO.

If there is interest, I can create performance tests to support what I’m sharing in this post.

Enough talk, let’s start with the first example.

 

1.
In the first example I’ll demonstrate the impact of UNDO when dealing with a regular, permanent table.

I need to create a ttest table based on my sample table (tbase) with 100k rows.


create table ttest tablespace users 
as
select * from tbase where rownum <= 100000;

At this point it’s important to reconnect your database session to reset the session statistics.
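After reconnecting, you can verify that the counters really are back to zero with a quick check against v$mystat; a minimal sketch:

select b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and a.name = 'undo change vector size';

A freshly created session should report a value at (or very close to) 0 here.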

Now let’s perform a DELETE operation, which will generate the most UNDO blocks.

delete from ttest;

select used_ublk
from v$transaction
where addr = 
(select taddr from v$session where sid = 
(select sid from v$mystat where rownum = 1 
)
);

used_ublk 2382


select *
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) like '%undo%';

undo change vector size 18792772

As you can see, deleting 100k records from the table generated 2382 UNDO blocks and an undo change vector size of 18792772 bytes.

If I execute a SELECT statement from a separate session and read only the first 100 rows (defined by arraysize), the SELECT needs to read some blocks from the ttest table but also some blocks from UNDO, to get all the data in a consistent manner, because the DELETE in the first session hasn’t committed yet.


select * from ttest;

select *
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) like '%undo%';

data blocks consistent reads - undo records applied 6602

This statistic is extremely important to note.

Again, you need to know a bit about Oracle internals to really understand the importance of this example, but at a higher level it’s enough to say that, due to read consistency, all other sessions will need to read both the table blocks and the UNDO segments for any blocks changed by the first session.

Reading blocks from UNDO might lead to physical reads from disk.

I’m using the word “might” because UNDO blocks behave like regular table blocks, which means they are subject to the Oracle buffer cache mechanism (the LRU algorithm).

For that reason, UNDO blocks will either be found in the buffer cache or be read from physical disk.

The amount of UNDO generated depends on the DML operation (DELETE has the heaviest impact, while INSERT generates the fewest UNDO blocks), the number and type of columns (VARCHAR2, CHAR, LOB…) and, of course, the number of rows the DML statement changes.
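If you want to verify that ordering yourself, you can repeat the used_ublk measurement from above after each DML type; a sketch (run each statement in its own transaction and roll back in between; the DELETE figure should come out far larger than the INSERT one):

insert into ttest select * from tbase where rownum <= 100000;

select used_ublk
from v$transaction
where addr = 
(select taddr from v$session where sid = 
(select sid from v$mystat where rownum = 1 
)
);

rollback;

delete from ttest;

select used_ublk
from v$transaction
where addr = 
(select taddr from v$session where sid = 
(select sid from v$mystat where rownum = 1 
)
);

rollback;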

All in all, it should now be clear why it’s important to place UNDO on the fastest (flash) disks as well.
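To check where your UNDO currently lives, you can list the datafiles of the active undo tablespace; a minimal sketch:

select file_name
from dba_data_files
where tablespace_name = 
(select value from v$parameter where name = 'undo_tablespace');

If those files are not on your fastest storage, moving them there is worth considering for the scenario described above.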

At the end of the example, don’t forget to drop the created table.


drop table ttest purge;

 

2.
In the second example I’ll use a global temporary table (GTT) of type ON COMMIT PRESERVE ROWS.


create global temporary table ttest_gtt_preserve
on commit preserve rows
as
select * from tbase where rownum <= 100000;

select used_ublk
from v$transaction
where addr = 
(select taddr from v$session where sid = 
(select sid from v$mystat where rownum = 1 
)
);

no rows returned

select *
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) like '%undo%' and value > 0;

undo change vector size 18272



delete from ttest_gtt_preserve;


select used_ublk
from v$transaction
where addr = 
(select taddr from v$session where sid = 
(select sid from v$mystat where rownum = 1 
)
);

used_ublk 2382


select *
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) like '%undo%' and value > 0;

undo change vector size     18883680

As you can see, the undo change vector size and used_ublk (2382) are almost identical to the previous case with the permanent table.

There is no difference between permanent and GTT tables in the number of UNDO blocks generated.

In this case there is no point in executing the SELECT statement from a separate session: by the nature of GTT tables (whether ON COMMIT PRESERVE ROWS or ON COMMIT DELETE ROWS), other sessions will never be able to see the data from the first session.
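It is easy to convince yourself of this: while the first session still holds its (uncommitted) changes, a second session simply sees an empty table, since GTT data is private to the session that loaded it:

select count(*) from ttest_gtt_preserve;

count(*) 0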

At the end, let’s clean up GTT table.


drop table ttest_gtt_preserve;

 

3.
In the last example I want to check the UNDO impact of a GTT of type ON COMMIT DELETE ROWS.

As I’m using an ON COMMIT DELETE ROWS table, I first need to create only the table structure (hence the WHERE 1 = 2), because table creation performs an implicit commit which would delete all loaded records; in a second step I simply insert the 100k rows.

For that reason, session stats need to be checked after each step to get the proper results.


create global temporary table ttest_gtt_delete
on commit delete rows
as
select * from tbase where 1 = 2;

select *
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) like '%undo%' and value > 0;

undo change vector size 37004

insert into ttest_gtt_delete
select * from tlala where rownum <= 100000;

select *
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) like '%undo%' and value > 0;

undo change vector size 349424


select used_ublk
from v$transaction
where addr = 
(select taddr from v$session where sid = 
(select sid from v$mystat where rownum = 1 
)
);

used_ublk 40

delete from ttest_gtt_delete;


select *
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) like '%undo%' and value > 0;

undo change vector size 19247448


select used_ublk
from v$transaction
where addr = 
(select taddr from v$session where sid = 
(select sid from v$mystat where rownum = 1 
)
);

used_ublk 2421

All the steps are the same as in the previous example.
The most important finding is that both types of GTT tables have an identical impact on UNDO, which in turn is identical to the permanent table example.

Let’s clean up GTT table used in this example:


drop table ttest_gtt_delete;

 

Summary:

By now I hope you understand why UNDO is also important to tune.

In the first example I stressed a case you can often see during batch jobs: one session has deleted (or updated) many rows without committing, while at the same time several other sessions need to read from the same table.

To get a consistent view, those sessions need to read not only the table blocks but the UNDO blocks as well.

By adding more threads/parallel sessions, the performance impact of UNDO blocks increases.

From the last two examples, the most important point to note when considering UNDO is that there is no difference in behavior between the two types of GTT tables: ON COMMIT PRESERVE ROWS and ON COMMIT DELETE ROWS.

Even when comparing with a regular, permanent table, the amount of generated UNDO blocks remains the same.

Thus, although some Oracle DBAs/developers believe that by using GTT tables they will reduce the amount of generated REDO, they forget about UNDO, which is the same whether you use a permanent or a temporary table.

When you delete or update a lot of records in a large table, many UNDO blocks will be written to disk.

Faster disks mean faster writes in that case. For that reason we want fast disks dedicated to UNDO.

The impact of UNDO on performance is most visible in batch processing, where several sessions or threads are changing the same table at the same time.

A typical real-world example is Oracle Retail: a table like ITEM_LOC_SOH is heavily updated through Oracle Forms (meaning a commit can be delayed for several minutes) while at the same time playing a major role in many Oracle Retail batch processes, so the impact of UNDO on database performance is clearly visible.

I didn’t cover “snapshot too old” errors, how to size UNDO, manual vs. automatic UNDO management, and many other UNDO-related topics, which I’ll leave for a future post.

 

