In this post I’ll explain one case where manual memory management is superior to letting Oracle automatically manage PGA memory.
Recently I needed to create a test table with 10 million rows using the CONNECT BY clause.
When I issued the following DDL command:
create table test as
select trunc(sysdate) + (2 * rownum - 1) / (24 * 60) as timestamp,
       trunc(dbms_random.value(1, 20 + 1)) * 5 as cache
from dual
connect by level <= 10000000;
I got the following error:
Error report -
ORA-30009: Not enough memory for CONNECT BY operation
30009. 0000 - "Not enough memory for %s operation"
*Cause:  The memory size was not sufficient to process all the levels of the hierarchy specified by the query.
*Action: In WORKAREA_SIZE_POLICY=AUTO mode, set PGA_AGGREGATE_TARGET to a reasonably larger value.
         Or, in WORKAREA_SIZE_POLICY=MANUAL mode, set SORT_AREA_SIZE to a reasonably larger value.
The PGA parameters were set to the following values:
- pga_aggregate_limit big integer 4G
- pga_aggregate_target big integer 2G
and WORKAREA_SIZE_POLICY was set to AUTO.
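These settings are easy to confirm with a query against V$PARAMETER (a minimal check; SHOW PARAMETER in SQL*Plus gives the same information):
select name, value
from v$parameter
where name in ('pga_aggregate_limit', 'pga_aggregate_target', 'workarea_size_policy');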
With such a setup, Oracle automatically controls and allocates the related work areas, such as the sort and hash areas.
Even after increasing the PGA_AGGREGATE_TARGET parameter to 4G (which requires bouncing the database), I was still getting the same error.
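For completeness, this is roughly how such a change looks (a sketch, assuming the instance uses an spfile):
alter system set pga_aggregate_target = 4G scope=spfile;
-- then bounce the instance (SHUTDOWN IMMEDIATE followed by STARTUP in SQL*Plus)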
The following query (run as SYS, since it reads the X$ fixed tables) reveals why.
SELECT x.ksppinm AS name,
       CASE WHEN x.ksppinm LIKE '%pga%'
            THEN round(to_number(y.ksppstvl) / 1024 / 1024, 0)
            ELSE to_number(y.ksppstvl)
       END AS value,
       x.ksppdesc AS description
FROM   x$ksppi x, x$ksppcv y
WHERE  x.inst_id = userenv('Instance')
AND    y.inst_id = userenv('Instance')
AND    x.indx = y.indx
AND    x.ksppinm IN ('pga_aggregate_target', '_pga_max_size',
                     '_smm_max_size', '_smm_px_max_size');
NAME                   VALUE    DESCRIPTION
pga_aggregate_target   2048     Target size for the aggregate PGA memory consumed by the instance
_pga_max_size          410      Maximum size of the PGA memory for one process
_smm_max_size          209710   maximum work area size in auto mode (serial)
_smm_px_max_size       1048576  maximum work area size in auto mode (global)
(the pga% values are converted to MB by the query; the _smm% values are reported by Oracle in KB)
The most important parameter here is _pga_max_size, which limits the PGA memory available to a single process; notice that it is derived from PGA_AGGREGATE_TARGET (here roughly 20% of it: 410M out of 2G).
Only after I further increased the PGA memory to 8G could I finally complete the table creation.
With PGA_AGGREGATE_TARGET set to 8G, the same query returns the following values:
pga_aggregate_target   8192     Target size for the aggregate PGA memory consumed by the instance
_pga_max_size          1638     Maximum size of the PGA memory for one process
_smm_max_size          838860   maximum work area size in auto mode (serial)
_smm_px_max_size       4194304  maximum work area size in auto mode (global)
For several reasons (bouncing the whole database while guessing at a sufficiently large PGA value, hidden restrictions such as _pga_max_size, and so on), this is obviously not the best way to do the job.
A much better way is to change the required parameters at the session level, where no database restart is needed.
In this case I’ll switch the workarea policy to manual and increase the sort and hash area sizes.
The question you may ask is: how do I find out what value to set?
To keep it simple, instead of using the CONNECT BY statement, let’s pretend the table has already been created and I want to create another one based on it.
This is the statement I’m going to use:
create table test2 as
select /*+ no_gather_optimizer_statistics */ * from test order by 1;
One note:
Starting with release 12c, Oracle automatically collects table statistics during a CREATE TABLE AS SELECT, which appears as an extra step in the execution plan.
The /*+ no_gather_optimizer_statistics */ hint disables this automatic statistics gathering.
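If you want to double-check that the hint worked, you can display the cursor’s plan afterwards and confirm that the OPTIMIZER STATISTICS GATHERING step is absent (a sketch; replace &sql_id with the actual sql_id of your statement):
select *
from table(dbms_xplan.display_cursor(sql_id => '&sql_id', format => 'BASIC'));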
Next you need to change memory management to manual:
ALTER SESSION SET workarea_size_policy = manual;
After that you can leave the defaults as they are; even if you were to hit the same error as before, you would now be able to find out exactly how much memory is needed to create the table.
In this case I’m setting SORT_AREA_SIZE to just 1K:
ALTER SESSION SET sort_area_size = 1024;
and then interrupt the query execution after about 10 seconds (the query would still be running; because it spills to TEMP segments, it is extremely slow).
We can confirm that the change has been applied by executing:
select name, value from v$parameter where name like '%sort_area_size%';
sort_area_size 1024
This change also affects HASH_AREA_SIZE, which defaults to twice the SORT_AREA_SIZE, as can be seen from the following query:
select name, value from v$parameter where name like '%hash%';
hash_area_size 2048
Now execute the CTAS statement that creates the TEST2 table:
create table test2 as
select /*+ no_gather_optimizer_statistics */ * from test order by 1;
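While the statement is running, you can watch the sort spilling to TEMP from another session (a sketch; on older releases this view was called V$SORT_USAGE):
select username, segtype, blocks
from v$tempseg_usage
where segtype = 'SORT';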
Then interrupt the execution; to read the runtime statistics, you first need the sql_id of that statement.
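One way to locate it is a quick lookup in V$SQL (a sketch; the LIKE pattern assumes the statement text begins exactly as written above):
select sql_id, sql_text
from v$sql
where sql_text like 'create table test2%';
With the sql_id in hand, execute the following query to find out how much memory you need to allocate for the sort: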
with tmp as (
  select timestamp, operation, estimated_optimal_size
  from v$sql_plan_statistics_all
  where sql_id = '8wc56sft57azf'
)
select operation, estimated_optimal_size
from tmp
where timestamp = (select max(timestamp) from tmp);
OPERATION                 ESTIMATED_OPTIMAL_SIZE
CREATE TABLE STATEMENT    Null
LOAD AS SELECT            36315136
SORT                      228123648
TABLE ACCESS              Null
Now I set the SORT_AREA_SIZE parameter (specified in bytes) slightly higher than the ESTIMATED_OPTIMAL_SIZE of the SORT step in the execution plan:
ALTER SESSION SET sort_area_size = 238123648;
and execute CTAS again:
create table test2 as
select /*+ no_gather_optimizer_statistics */ * from test order by 1;
Table TEST2 created.
the table is created successfully, after which you can revert memory management back to AUTO:
ALTER SESSION SET workarea_size_policy = auto;
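As a final check, you can confirm the session is back on automatic management (a minimal check):
select name, value
from v$parameter
where name = 'workarea_size_policy';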
Summary:
Even though newer DBAs might not even know it exists, manual memory management still plays a role in certain situations.
Here I’ve shown its main benefit: by setting memory parameters at the session level, you don’t need to bounce the whole database, which would affect all users.
Additionally, the manual approach is more efficient in this case, as I need to allocate only about 220M of memory to complete the CTAS.
Compare that with the 8G I had to allocate for PGA_AGGREGATE_TARGET under AUTO memory management, a consequence of the hidden parameters (_pga_max_size, _smm_max_size & _smm_px_max_size) that impose additional per-process limits.
That’s probably the reason behind Oracle’s decision to keep manual memory management even in the 19c version of the Oracle database.