Oracle Turf

Rajiv Iyer's observations on Oracle Performance

Cardinality Estimation, precompute_subquery hint


Recently, I posted on Oracle List to ask whether somebody could help me with a workaround for a query whose cardinality estimates were significantly different from the actual rows processed. Jonathan Lewis suggested rewriting the SQL as a subquery and using the hint “precompute_subquery”. Not surprisingly, it worked like a charm.

Here is a small test case that demonstrates the problem and the workaround.

Create BILLING and LOCATION tables

--create billing
create table billing as
with vw as
  (select rownum id from dual connect by level <= 1000)
select
  rownum bill_id,
  case
    when rownum <= 50000 then
      case when mod(rownum,50000)=0 then 990
           when mod(rownum,5000)=0  then 991
           when mod(rownum,500)=0   then 992
           when mod(rownum,50)=0    then 993
           when mod(rownum,5)=0     then 994
           else 995
      end
    else 996
  end location_id
from vw, vw
where rownum <= 100000;

--create location
create table location as
select location_id, 'X-'||rownum location_name
from (
  select distinct location_id from billing
) order by location_id;

--Gather Stats
exec dbms_stats.gather_table_stats(user,'BILLING');
exec dbms_stats.gather_table_stats(user,'LOCATION');

Data distribution

select a.location_id,b.location_name,count(*) from billing a, location b
where a.location_id = b.location_id
group by a.location_id,b.location_name
order by 3;

LOCATION_ID LOCATION_NAME   COUNT(*)
----------- ------------- ----------
        990 X-4                    1
        991 X-3                    9
        992 X-2                   90
        993 X-1                  900
        994 X-6                 9000
        995 X-5                40000
        996 X-7                50000

Plan for a query looking for billing details related to location ‘X-3’.

select * from billing a, location b where a.location_id = b.location_id
and b.location_name = 'X-3'

Plan hash value: 725447293

| Id  | Operation          | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
|   0 | SELECT STATEMENT   |          |      1 |        |      9 |00:00:00.03 |     228 |       |       |          |
|*  1 |  HASH JOIN         |          |      1 |  14286 |      9 |00:00:00.03 |     228 |  1236K|  1236K|  414K (0)|
|*  2 |   TABLE ACCESS FULL| LOCATION |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|   3 |   TABLE ACCESS FULL| BILLING  |      1 |    100K|    100K|00:00:00.01 |     225 |       |       |          |

Predicate Information (identified by operation id):

1 - access("A"."LOCATION_ID"="B"."LOCATION_ID")
2 - filter("B"."LOCATION_NAME"='X-3')

The “HASH JOIN” operation shows that the optimizer estimated 14286 rows, whereas the actual number of rows processed was just 9. That’s because the optimizer has its limitations: it can estimate only from the available statistics, and with no histogram on LOCATION_ID it assumed a uniform distribution. In this case it calculated

Cardinality = num of rows / num of distinct values = 100000/7 = 14286 (rounded)
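A small sketch (Python standing in for the optimizer's arithmetic) of why the estimate misses: without a histogram, num_rows/num_distinct is the estimate for every value, no matter how skewed the column really is:

```python
# Uniform-distribution estimate, as the optimizer computes it without a histogram
num_rows = 100000       # rows in BILLING
num_distinct = 7        # distinct LOCATION_ID values

estimate = round(num_rows / num_distinct)
print(estimate)          # 14286 -- the E-Rows value in the plan

# The actual, heavily skewed distribution (from the query above)
actual = {990: 1, 991: 9, 992: 90, 993: 900, 994: 9000, 995: 40000, 996: 50000}
print(actual[991])       # 9 -- the A-Rows value for location 'X-3'
```

The same 14286 comes out for every location, even though the true counts range from 1 to 50000.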

Rewriting the query to use the “precompute_subquery” hint, and additionally creating a histogram on LOCATION_ID, gives us a much better cardinality estimate.

exec dbms_stats.gather_table_stats(user,'BILLING',method_opt=>'FOR COLUMNS LOCATION_ID SIZE AUTO');

explain plan for
select * from billing where location_id in (select /*+ precompute_subquery */ location_id from location where location_name = 'X-3');

Plan hash value: 2877307530

| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |         |    53 |   583 |    66   (4)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| BILLING |    53 |   583 |    66   (4)| 00:00:01 |

Predicate Information (identified by operation id):

1 - filter("LOCATION_ID"=991)

Our test case involved only two tables, but with many tables similar scenarios can lead to far less desirable execution plans. Notice the predicate information above: the subquery has been executed at optimization time and replaced by the literal value 991, which lets the optimizer use the histogram on LOCATION_ID. The “precompute_subquery” hint offers a good workaround. It is not clear whether the hint is documented; checking with Oracle Support may help.


Written by rajiviyer

August 30, 2013 at 10:46 pm

Posted in Optimizer



ORA-04030 has been a frequent topic of discussion. We have seen people suggest ways of avoiding it, either by setting appropriate process limits in the OS or by fixing the bugs associated with it in different Oracle releases. But recently I saw a stored procedure being responsible for this error.

One of our non-production database servers running Oracle was experiencing high memory consumption. The culprit was a session executing a stored procedure and using 3 GB of memory. The developers had no reason to suspect any problem with the code, which had been working for ages. Indeed, nothing had changed in the code. But then, something HAD changed. Let’s explore.

Let’s consider a version of the procedure stripped to the bare minimum.

create table tab_case(
file_id number,
case_id varchar2(255),
submit_date date
);
insert into tab_case(file_id,case_id,submit_date) values (145,100,sysdate-7);
insert into tab_case(file_id,case_id,submit_date) values (146,101,sysdate-7);
insert into tab_case(file_id,case_id,submit_date) values (199,1031,sysdate);
insert into tab_case(file_id,case_id,submit_date) values (200,1405,sysdate);

Create types to hold the error records.

create or replace type err_rec_type as object (
rec_number number,
err_message varchar2(255)
);
/

create or replace type err_tbl_type as table of err_rec_type;
/

Create the stored procedure.
It’s a very simple procedure which fetches data through a cursor and assigns it to a %ROWTYPE variable. Any error is caught in the exception block and the details are captured in the error record type. The loop is supposed to continue until the last record is fetched from the cursor.

create or replace procedure case_prc(p_err_rec out err_tbl_type) as
  case_rec tab_case%rowtype;
  cs sys_refcursor;
  v_err_index number := 1;
  v_ora_mesg varchar2(255);
begin
  p_err_rec := err_tbl_type();
  open cs for select file_id,case_id,submit_date from tab_case where file_id in (145,146);
  loop
    begin
      fetch cs into case_rec;
      exit when cs%notfound;
    exception
      when others then
        v_ora_mesg := substr(sqlerrm,1,255);
        p_err_rec.extend;
        p_err_rec(v_err_index) := err_rec_type(case_rec.file_id,v_ora_mesg);
        v_err_index := v_err_index + 1;
    end;
  end loop;
  close cs;
end;
/

Execute the procedure. It completes almost instantly.

declare
  p_err_rec apps.err_tbl_type;
begin
  case_prc(p_err_rec);
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:00.22

Now let’s assume there is a business requirement and we add a new column to the table TAB_CASE.

alter table tab_case add op_unit varchar2(5) default 'IND';

And we rerun the procedure. This time it does not complete; the session’s memory consumption keeps growing. Here is what happens:


1) A column mismatch in the following part of the code causes an ORA-06504 exception:

fetch cs into case_rec;
exit when cs%notfound;

(case_rec now expects an additional column, op_unit)
2) The error is always caught by the WHEN OTHERS handler inside the loop, so the EXIT WHEN cs%NOTFOUND is never reached.
3) As a result, the loop never terminates, and an error record is appended to the collection on every iteration.
4) Finally the session ends with an ORA-04030 error when err_tbl_type is no longer able to extend itself to hold more records.
5) The alert log shows the following messages, which confirm the same.
ORA-04030: out of process memory when trying to allocate 16328 bytes (koh-kghu call ,pmuccst: adt/record)
ORA-06504: PL/SQL: Return types of Result Set variables or query do not match
So to summarize, faulty PL/SQL code can cause an ORA-04030 error, and the accompanying error messages may throw more light on the issue.
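The failure mode can be sketched in a few lines (Python standing in for PL/SQL here; the names are illustrative, not from the original code). The fetch raises on every pass, the handler swallows the error before the exit test is ever reached, and the error collection grows without bound:

```python
def broken_fetch():
    # Stand-in for FETCH cs INTO case_rec after the new column was added
    raise RuntimeError("ORA-06504: Return types of Result Set variables "
                       "or query do not match")

def run(max_iterations):
    """Simulate the loop, capped so the demo terminates instead of ORA-04030."""
    err_tbl = []                      # stand-in for the err_tbl_type collection
    for _ in range(max_iterations):
        try:
            broken_fetch()
            break                     # EXIT WHEN cs%NOTFOUND -- never reached
        except RuntimeError as e:
            err_tbl.append(str(e)[:255])   # collection extends every iteration
    return err_tbl

print(len(run(10000)))  # one error record appended per iteration
```

Without the iteration cap, the list (like the PL/SQL collection) would grow until the process runs out of memory.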

Written by rajiviyer

August 17, 2013 at 1:06 am

Posted in PL/SQL