People – not just people; they are your resources!

April 11, 2011

Other people are your greatest resources! They give birth to you; they feed you, dress you, provide you with money, make you laugh and cry.  They comfort you, heal you, invest your money, service your car and bury you.  We can’t live without them.  We can’t even die without them.

Categories: Oracle, Performance

Consistent gets – How Many?

February 10, 2014

When you execute a SQL statement, why is there a difference in consistent gets for the same SQL against the same set of data?  For any SELECT query, Oracle needs to prepare a consistent version of the data in the buffer cache, applying undo records so that the blocks reflect a specific SCN, and then return that data to the requesting session.  Touching any block in the buffer cache to prepare these consistent blocks is counted as a consistent read (consistent get).

In the ideal case, reading a single row from a block needs just one consistent get.


SQL> select * from test where rowid='AAAPaVAABAAAzfJAAA';

C1 C2

---------- ----------

1 CON$

Execution Plan

----------------------------------------------------------

Plan hash value: 2153624467

 

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |    32 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| TEST |     1 |    32 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

 

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

1  consistent gets

0  physical reads

0  redo size

589  bytes sent via SQL*Net to client

520  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1  rows processed

But when I ran the same SQL again, after an uncommitted transaction from a different session, the number of consistent gets went up to 3 just to read a single row.


SQL> select * from test where rowid='AAAPaVAABAAAzfJAAA';

C1 C2

---------- ----------

1 ABC

 

Execution Plan

----------------------------------------------------------

Plan hash value: 2153624467

 

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |    32 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| TEST |     1 |    32 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

3  consistent gets

0  physical reads

108  redo size

588  bytes sent via SQL*Net to client

520  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1  rows processed

My initial explanation for the 3 consistent gets was (although this later turned out to be incorrect):

  • Access the current block
  • Create a new CONSISTENT block and copy the CURRENT block contents
  • Access the undo block and rollback the uncommitted transaction

So, what if there is one more transaction on a different row of the same block?  I would need to access one more undo block to roll back the second uncommitted transaction, so I expected 4 consistent gets.


SQL> select * from test where rowid='AAAPaVAABAAAzfJAAA';

C1 C2

---------- ----------

1 ABC

Execution Plan

----------------------------------------------------------

Plan hash value: 2153624467

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |    32 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| TEST |     1 |    32 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

5  consistent gets

0  physical reads

108  redo size

588  bytes sent via SQL*Net to client

520  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1   rows processed

Contrary to my expectation of one more consistent get, the count grew by 2, reaching 5 gets.  I was sure I needed to read one extra undo block for the second uncommitted transaction, but what is the 5th block?

A 10046 trace showed me that I was totally wrong – including my earlier assumption.  There was only one block read from the table; the remaining 4 reads were from the UNDO file.


WAIT #2: nam='db file sequential read' ela= 34 file#=1 block#=210889 blocks=1 obj#=63125 tim=39307229867247

WAIT #2: nam='db file sequential read' ela= 25 file#=3 block#=208 blocks=1 obj#=0 tim=39307229867390

WAIT #2: nam='db file sequential read' ela= 22 file#=3 block#=224 blocks=1 obj#=0 tim=39307229867454

WAIT #2: nam='db file sequential read' ela= 22 file#=3 block#=439 blocks=1 obj#=0 tim=39307229867585

WAIT #2: nam='db file sequential read' ela= 21 file#=3 block#=157 blocks=1 obj#=0 tim=39307229867696
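The wait lines above can be tallied with a short script (a sketch only; the trace lines are exactly as captured above, where file#=1 holds the TEST table and file#=3 is the undo tablespace in this test):

```python
import re

# Count 'db file sequential read' waits per file# in the 10046 trace lines
# shown above (file#=1 holds the TEST table, file#=3 is the undo tablespace).
trace = """\
WAIT #2: nam='db file sequential read' ela= 34 file#=1 block#=210889 blocks=1 obj#=63125 tim=39307229867247
WAIT #2: nam='db file sequential read' ela= 25 file#=3 block#=208 blocks=1 obj#=0 tim=39307229867390
WAIT #2: nam='db file sequential read' ela= 22 file#=3 block#=224 blocks=1 obj#=0 tim=39307229867454
WAIT #2: nam='db file sequential read' ela= 22 file#=3 block#=439 blocks=1 obj#=0 tim=39307229867585
WAIT #2: nam='db file sequential read' ela= 21 file#=3 block#=157 blocks=1 obj#=0 tim=39307229867696
"""

reads = {}
for line in trace.splitlines():
    m = re.search(r"db file sequential read.*?file#=(\d+)", line)
    if m:
        fno = int(m.group(1))
        reads[fno] = reads.get(fno, 0) + 1

print(reads)  # {1: 1, 3: 4} - one table block read, four undo-file reads
```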

First undo block


Block dump from disk:

buffer tsn: 2 rdba: 0x00c000d0 (3/208)

scn: 0x01a7.37e301cf seq: 0x01 flg: 0x04 tail: 0x01cf2601

frmt: 0x02 chkval: 0xd8b1 type: 0x26=KTU SMU HEADER BLOCK

Second undo block


buffer tsn: 2 rdba: 0x00c000e0 (3/224)

scn: 0x01a7.37e300bb seq: 0x01 flg: 0x04 tail: 0x00bb2601

frmt: 0x02 chkval: 0xa4d2 type: 0x26=KTU SMU HEADER BLOCK

3rd undo block


buffer tsn: 2 rdba: 0x00c001b7 (3/439)

scn: 0x01a7.37e301cf seq: 0x01 flg: 0x04 tail: 0x01cf0201

frmt: 0x02 chkval: 0xc91c type: 0x02=KTU UNDO BLOCK

4th undo block


buffer tsn: 2 rdba: 0x00c0009d (3/157)

scn: 0x01a7.37e300bb seq: 0x01 flg: 0x04 tail: 0x00bb0201

frmt: 0x02 chkval: 0x5f27 type: 0x02=KTU UNDO BLOCK

If you look closely, the blocks come in pairs under the same SCN: the first of each pair, of type “KTU SMU HEADER BLOCK”, is the undo segment header, and the second, of type “KTU UNDO BLOCK”, is the undo block itself.  So Oracle uses 1 consistent get to copy the current block, plus 1 undo header block and 1 undo block per transaction.  X$BH also showed one more block in STATE 3, which I believe is not reported in the consistent gets: the block read from the data file holding the uncommitted transactions.  I repeated the test with one more transaction and got 7 consistent gets.


SQL>  select * from test where rowid='AAAPaVAABAAAzfJAAA';

C1 C2

---------- ----------

1 ABC

Execution Plan

----------------------------------------------------------

Plan hash value: 2153624467

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |    32 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| TEST |     1 |    32 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

7  consistent gets

0  physical reads

108  redo size

588  bytes sent via SQL*Net to client

520  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1   rows processed

In a nutshell, how many consistent gets, at most, can it take just to read one row from a table whose block holds 50 different rows?  In theory, there can be 50 concurrent transactions on that block, one per row.  So the maximum number of consistent gets is 2n+1, where n is the number of rows (and hence possible concurrent transactions) in the block – up to 101 consistent gets to read one row!
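The counting model above can be written down in a few lines (my own back-of-the-envelope formula, not an Oracle-documented limit): one get to copy the current block, plus one undo header block and one undo block per concurrent uncommitted transaction.

```python
def max_consistent_gets(txns: int) -> int:
    """Consistent gets to read one row from a block with `txns` concurrent
    uncommitted transactions: 1 current-block copy + 2 undo reads per txn."""
    return 2 * txns + 1

# Matches the runs above: 0 txns -> 1 get, 1 -> 3, 2 -> 5, 3 -> 7
for txns, observed in [(0, 1), (1, 3), (2, 5), (3, 7)]:
    assert max_consistent_gets(txns) == observed

# A block holding 50 rows can host up to 50 concurrent transactions:
print(max_consistent_gets(50))  # 101
```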

Elapsed Checkpoint Time

December 31, 2013

Adieu to 2013 with this small post!  Happy New Year to everyone.

Oracle controls the incremental checkpoint frequency and interval based on the parameter FAST_START_MTTR_TARGET or LOG_CHECKPOINT_INTERVAL.  But can you see the actual checkpoint interval?  What is the elapsed time between checkpoints?

Yes, you can.  The fixed table X$KCCDI holds the SCN information: column DICKP_SCN is the last checkpoint SCN and DICUR_SCN is the current SCN.

We get the CURRENT_SCN from v$database as,


SQL> select current_scn from v$database;

CURRENT_SCN

-----------

102707163

And looking at the X$KCCDI fixed table, you can see the same SCN values.


SQL> select DICKP_SCN,DICUR_SCN from X$KCCDI;

DICKP_SCN        DICUR_SCN

---------------- ----------------

102711050        102711285

SQL> select current_scn from v$database;

CURRENT_SCN

-----------

102711289

Using the SCN_TO_TIMESTAMP function, you can convert the SCN values to time stamps.


SQL> select scn_to_timestamp(DICKP_SCN), scn_to_timestamp(DICUR_SCN) from X$KCCDI;

SCN_TO_TIMESTAMP(DICKP_SCN)                                                 SCN_TO_TIMESTAMP(DICUR_SCN)

--------------------------------------------------------------------------- ---------------------------------------------------------------------------

31-DEC-13 01.18.54.000000000 AM                                             31-DEC-13 01.31.31.000000000 AM

Now it is easy to find the elapsed time since the last checkpoint.


SQL> select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:01:51.000000000

The checkpoint elapsed time depends on the load on the database.  When the database was idle, it took about 3 minutes between checkpoints – but that is not a hard limit.


SQL> select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:03:00.000000000

SQL>  select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:00.000000000

But the elapsed time went down to a few seconds when I increased the load.


SQL> select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:20.000000000

SQL>  select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:00.000000000

 

SQL>  select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:03.000000000

SQL> select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:00.000000000
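The interval subtraction Oracle performs can be reproduced outside the database; here is a sketch using the two timestamps from the first SCN_TO_TIMESTAMP listing (note that each listing in this post comes from a separate execution, so the printed intervals differ between runs):

```python
from datetime import datetime

FMT = "%d-%b-%y %I.%M.%S"  # matches the NLS-style timestamps shown above

ckpt = datetime.strptime("31-DEC-13 01.18.54", FMT)  # scn_to_timestamp(DICKP_SCN)
curr = datetime.strptime("31-DEC-13 01.31.31", FMT)  # scn_to_timestamp(DICUR_SCN)

# Same subtraction as scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)
print(curr - ckpt)  # 0:12:37 since the last checkpoint
```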

And a big good bye to 2013!

log_archive_dest_?? is ISPDB_MODIFIABLE?

December 20, 2013

In Oracle 12c, some parameters can be changed in a pluggable database if their impact is limited to the current container.  However, global parameters like SGA_TARGET can be set only from the root container, CDB$ROOT.  A new column, ISPDB_MODIFIABLE, has been added to the V$PARAMETER view to list the parameters modifiable from a PDB.


SQL> select count(*) from  v$parameter where ISPDB_MODIFIABLE='TRUE';

COUNT(*)

----------

171

There are 171 parameters listed as modifiable from a PDB.  Looking at them closely, I was interested in the following parameters.


SQL> select name from v$parameter where name like '%log_archive%'  and ISPDB_MODIFIABLE='TRUE';

NAME

--------------------------------------------------------------------------------

log_archive_dest_1

log_archive_dest_2

log_archive_dest_3

log_archive_dest_4

log_archive_dest_5

log_archive_dest_6

log_archive_dest_7

log_archive_dest_8

log_archive_dest_9

log_archive_dest_10

log_archive_dest_11

log_archive_dest_12

log_archive_dest_13

log_archive_dest_14

log_archive_dest_15

log_archive_dest_16

log_archive_dest_17

log_archive_dest_18

log_archive_dest_19

log_archive_dest_20

log_archive_dest_21

log_archive_dest_22

log_archive_dest_23

log_archive_dest_24

log_archive_dest_25

log_archive_dest_26

log_archive_dest_27

log_archive_dest_28

log_archive_dest_29

log_archive_dest_30

log_archive_dest_31

log_archive_dest_state_1

log_archive_dest_state_2

log_archive_dest_state_3

log_archive_dest_state_4

log_archive_dest_state_5

log_archive_dest_state_6

log_archive_dest_state_7

log_archive_dest_state_8

log_archive_dest_state_9

log_archive_dest_state_10

log_archive_dest_state_11

log_archive_dest_state_12

log_archive_dest_state_13

log_archive_dest_state_14

log_archive_dest_state_15

log_archive_dest_state_16

log_archive_dest_state_17

log_archive_dest_state_18

log_archive_dest_state_19

log_archive_dest_state_20

log_archive_dest_state_21

log_archive_dest_state_22

log_archive_dest_state_23

log_archive_dest_state_24

log_archive_dest_state_25

log_archive_dest_state_26

log_archive_dest_state_27

log_archive_dest_state_28

log_archive_dest_state_29

log_archive_dest_state_30

log_archive_dest_state_31

log_archive_min_succeed_dest

63 rows selected.

The above 63 parameters are said to be modifiable from a PDB, but that contradicts the definition – only a parameter whose effect is limited to the PDB should be modifiable.  These are global parameters that take effect at the instance level, since Oracle shares a single set of redo log files across all PDBs.  So it makes no sense that we could modify them from a PDB.  I tried to change log_archive_dest_state_1 to DEFER from a PDB, and it failed.


SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL> alter system set log_archive_dest_state_1=defer;

alter system set log_archive_dest_state_1=defer

*

ERROR at line 1:

ORA-02097: parameter cannot be modified because specified value is invalid

ORA-65040: operation not allowed from within a pluggable database

Then I tried to change the value of the parameter log_archive_dest_1 from a PDB, which failed again.  I was, however, able to change it from the root container.


SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL> alter system set log_archive_dest_1= 'location=/n01/oradata1/12cDB/arch1' container=current ;

alter system set log_archive_dest_1= 'location=/n01/oradata1/12cDB/arch1' container=current

*

ERROR at line 1:

ORA-02097: parameter cannot be modified because specified value is invalid

ORA-65040: operation not allowed from within a pluggable database

SQL> conn / as sysdba

Connected.

SQL> alter system set log_archive_dest_1= 'location=/n01/oradata1/12cDB/arch1' scope=both;

System altered.

So my assumption was correct – even though these parameters are listed as modifiable, you can’t really modify them from a PDB.  A quick search on Oracle Support turned up a bug note, 17266585: the parameters above are wrongly listed as PDB-modifiable; they are not.

Categories: Oracle

Memory flushing and PDBs

December 19, 2013

In a multitenant database environment, PDBs are plugged into a CDB container sharing a single SGA.  All PDB objects are loaded into the same SGA components, and there are no ISPDB_MODIFIABLE SGA parameters.  Looking at V$LATCH_CHILDREN, the CBC (cache buffers chains) latches are all defined at the CDB level rather than per PDB.


SQL> show CON_ID

CON_ID

------------------------------

1

SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

SQL> select con_id ,count(*) from v$latch_children where name='cache buffers chains' group by con_id;

CON_ID   COUNT(*)

---------- ----------

0    16384


 

And from a PDB, the CBC structures still belong to the root container.


SQL> show con_id

CON_ID

------------------------------

3

SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL>  select con_id ,count(*) from v$latch_children where name='cache buffers chains' group by con_id;

CON_ID   COUNT(*)

---------- ----------

0      16384

Also, looking at the buffer cache working sets, the PDBs and the CDB share a single set of buffer cache working sets.  From the CDB:


SQL> show con_id

CON_ID

------------------------------

1

SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

SQL> select SET_ID,DBWR_NUM,CNUM_SET,POOL_ID from x$kcbwds;

SET_ID   DBWR_NUM   CNUM_SET    POOL_ID

---------- ---------- ---------- ----------

1          0          0          1

2          0          0          1

3          0          0          2

4          0          0          2

5          0      76713          3

6          0      76713          3

7          0          0          4

8          0          0          4

9          0          0          5

10          0          0          5

11          0          0          6

12          0          0          6

13          0          0          7

14          0          0          7

15          0          0          8

16          0          0          8

16 rows selected.

And the PDB is also pointing to the same working sets.


SQL> show con_id

CON_ID

------------------------------

3

SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL> select SET_ID,DBWR_NUM,CNUM_SET,POOL_ID from x$kcbwds;

SET_ID   DBWR_NUM   CNUM_SET    POOL_ID

---------- ---------- ---------- ----------

1          0          0          1

2          0          0          1

3          0          0          2

4          0          0          2

5          0      76713          3

6          0      76713          3

7          0          0          4

8          0          0          4

9          0          0          5

10          0          0          5

11          0          0          6

12          0          0          6

13          0          0          7

14          0          0          7

15          0          0          8

16          0          0          8

16 rows selected.

 

So, what happens if we flush the buffer cache from a container?  Since all the containers share the same SGA, I expected Oracle to flush all the blocks from the buffer cache regardless of which PDB or CDB they came from.  But I was totally wrong.  Oracle was able to identify ownership and flushed only the blocks belonging to the specific container.

Initially, both the PDB and the CDB show no cached objects from the schema THOMAS.


SQL> show con_name

 

CON_NAME

------------------------------

SALESDB

SQL> set linesize 200

col object_name for a30

col owner for a30

select CON_ID,object_name,owner,STATE

from dba_objects o, x$bh b

where o.object_id = b.obj and owner  in ('THOMAS') and state <> 0;


no rows selected

And from the CDB


SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

SQL> set linesize 200

col object_name for a30

col owner for a30

select CON_ID,object_name,owner,STATE

from dba_objects o, x$bh b

where o.object_id = b.obj and owner  in ('THOMAS') and state <> 0;


no rows selected

After selecting from the object, I can see the blocks are cached in the PDB.


SQL> select object_name from thomas.test where object_id=456;

OBJECT_NAME

------------------------------

STMT_AUDIT_OPTION_MAP


SQL> set linesize 200

col object_name for a30

col owner for a30

select CON_ID,object_name,owner,STATE

from dba_objects o, x$bh b

where o.object_id = b.obj and owner  in ('THOMAS') and state <> 0;


CON_ID OBJECT_NAME                    OWNER                               STATE

---------- ------------------------------ ------------------------------ ----------

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

 

Now I flushed the buffer cache from CDB$ROOT, expecting it to flush all the blocks from the buffer cache, including the PDBs’.


SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

SQL> !date

Thu Dec 19 11:53:50 UTC 2013

SQL> alter system flush buffer_cache;

System altered.

But contrary to my expectation, the PDB’s blocks were still cached in the buffer cache.


SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL> !date

Thu Dec 19 11:54:08 UTC 2013

SQL> set linesize 200

col object_name for a30

col owner for a30

select CON_ID,object_name,owner,STATE

from dba_objects o, x$bh b

where o.object_id = b.obj and owner  in ('THOMAS') and state <> 0;

CON_ID OBJECT_NAME                    OWNER                               STATE

---------- ------------------------------ ------------------------------ ----------

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

Even though Oracle uses a single buffer cache for the PDBs and the CDB, it is still able to track ownership and flush only the buffers belonging to the current container.
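The observed behaviour can be modelled as a per-container eviction (a toy sketch of the idea, not Oracle’s actual implementation): every buffer carries a CON_ID, and a flush evicts only the buffers whose CON_ID matches the issuing container.

```python
# Toy model of a shared buffer cache with per-container ownership.
cache = [
    {"con_id": 1, "obj": "ROOT_TAB"},   # CDB$ROOT buffer
    {"con_id": 3, "obj": "I_TEST"},     # SALESDB (PDB) buffers
    {"con_id": 3, "obj": "I_TEST"},
]

def flush_buffer_cache(cache, con_id):
    """Evict only the buffers owned by the flushing container."""
    return [b for b in cache if b["con_id"] != con_id]

cache = flush_buffer_cache(cache, con_id=1)  # flush issued from CDB$ROOT
print(cache)  # the PDB (con_id 3) buffers survive, as observed above
```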

Categories: Oracle

IMU and CR Reads

October 2, 2013

IMU (in-memory undo) and private redo strands are interesting additions in Oracle 10g.  These private buffers, allocated per transaction, help avoid many expensive latch acquisitions in both the buffer cache and the log buffer.  There is plenty of material on the net about these subjects, and many experts have written about them.  In theory, during a transaction the redo is placed in a private redo strand and the undo change vectors are placed in the IMU pool.  I am going to demonstrate what happens when another session does a CR read while the transaction is in progress.

Typically, a single-block CR read ends up as 3 consistent gets –

1) Touch the ‘CURRENT’ block

2) Read the undo block

3) Build the CR block

My test table is a single-row, single-column table.  From session 1, I ran an UPDATE transaction.


SQL>  desc abc

Name                                      Null?    Type

----------------------------------------- -------- ----------------------------

C1                                                 NUMBER

SQL> update abc set c1=3;

1 row updated.

From session 2, I fired a simple SELECT forcing a CR read, and Oracle returned the results with 2 consistent gets – not 3!


SQL> select * from abc where rowid='AAAP00AAFAAAR0VAAA';

C1

----------

1

Execution Plan

----------------------------------------------------------

Plan hash value: 1340935522

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |     3 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| ABC  |     1 |     3 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

2  consistent gets  => 2 CR gets

0  physical reads

0  redo size

520  bytes sent via SQL*Net to client

519  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)
1  rows processed

And after some time, when I re-ran the SELECT from the second session, Oracle used 3 consistent gets to return the results.


SQL>  select * from abc where rowid='AAAP00AAFAAAR0VAAA'

C1

----------

1

Execution Plan

----------------------------------------------------------

Plan hash value: 1340935522

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |     3 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| ABC  |     1 |     3 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

3  consistent gets  => 3 CR gets

0  physical reads

108  redo size

520  bytes sent via SQL*Net to client

519  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1    rows processed

It is the IMU that helps Oracle avoid the undo block read for the first SELECT.  There could be two explanations for the 2 consistent gets; I am not sure which one is correct.

  • Oracle makes a CR copy of the CURRENT block and uses the IMU records to roll back the transaction
  • The CURRENT block never received the new update in the first place; it gets updated only when the IMU flush happens.  So there is no need to roll anything back – just copy the CURRENT block contents to the CR block.
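Either way, the two paths differ only by the undo block visit; here is a toy model of the get counts (my own sketch of the bookkeeping, not internal Oracle code):

```python
def cr_gets(imu_active: bool) -> int:
    """Consistent gets for a single-block CR read of an uncommitted change.
    With an active IMU the undo record comes from the in-memory pool, so
    the undo block read is skipped."""
    gets = 1          # touch the CURRENT block
    gets += 1         # build/serve the CR copy
    if not imu_active:
        gets += 1     # read the undo block
    return gets

print(cr_gets(True), cr_gets(False))  # 2 3
```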

V$MYSTAT tells me what Oracle is doing.  For the 2-consistent-get read, Oracle bypasses the undo block read and uses the IMU to build the CR block (by either of the above methods), recording this under the statistic “IMU CR rollbacks”.


SQL> select b.name, a.value from v$mystat a, v$statname b

where a.statistic#=b.statistic# and b.name like '%IMU%';

NAME                                                                  VALUE

---------------------------------------------------------------- ----------

IMU commits                                                               0

IMU Flushes                                                               0

IMU contention                                                            0

IMU recursive-transaction flush                                           0

IMU undo retention flush                                                  0

IMU ktichg flush                                                          0

IMU bind flushes                                                          0

IMU mbu flush                                                             0

IMU pool not allocated                                                    0

IMU CR rollbacks                                                          1 => IMU rollbacks

IMU undo allocation size                                                  0

IMU Redo allocation size                                                  0

IMU- failed to get a private strand                                       0


13 rows selected.

And you can see an active IMU buffer in the x$ktifp fixed table; the column KTIFPTXFLG tells us the status.  If the status is 1, the transaction is active in the IMU; if the status is 7, the undo change vector has already been flushed out of the IMU, combined with the private redo strand, and placed in the public redo strand.  As long as the IMU status is 1, Oracle can avoid the undo block read when building a CR block; once the status is 7, Oracle must read the undo blocks, which leads to 3 consistent gets.  I believe an incremental checkpoint or a timeout forces the IMU flush for an uncommitted transaction.


SQL>  select KTIFPTXFLG,ADDR,KTIFPNO, KTIFPSTA, KTIFPXCB xctaddr,

2  to_number(KTIFPUPE, 'XXXXXXXXXXXXXXXX')-

3  to_number(KTIFPUPB, 'XXXXXXXXXXXXXXXX') ubsize,

4  (to_number(KTIFPUPB, 'XXXXXXXXXXXXXXXX')-  to_number(KTIFPUPC, 'XXXXXXXXXXXXXXXX'))*-1 ubusage,

5  to_number(KTIFPRPE, 'XXXXXXXXXXXXXXXX')- to_number(KTIFPRPB, 'XXXXXXXXXXXXXXXX') rbsize,

6  (to_number(KTIFPRPB, 'XXXXXXXXXXXXXXXX')-to_number(KTIFPRPC, 'XXXXXXXXXXXXXXXX'))*-1 rbusage,

7  KTIFPPSI,KTIFPRBS,KTIFPTCN

8* from x$ktifp

KTIFPTXFLG ADDR                KTIFPNO   KTIFPSTA XCTADDR              UBSIZE    UBUSAGE     RBSIZE    RBUSAGE   KTIFPPSI   KTIFPRBS   KTIFPTCN

---------- ---------------- ---------- ---------- ---------------- ---------- ---------- ---------- ---------- ---------- ---------- ----------

7 00007F1C37989BD0          0          0 00                    64000          0          0          0      65535 2147483647      15287

1 00007F1C37989BD0          1          0 000000011BF88848      64000       1384     126464       1300          3         34        856

7 00007F1C37989BD0          2          0 00                    64000          0          0          0      65535 2147483647        257

7 00007F1C37989BD0          3          0 00                    64000          0          0          0      65535 2147483647         80

7 00007F1C37989BD0          4          0 00                    64000          0          0          0      65535 2147483647        285

7 00007F1C37989BD0          5          0 00                    64000          0          0          0      65535 2147483647       1140

7 00007F1C37989BD0          6          0 00                    64000          0          0          0      65535 2147483647        733

7 00007F1C37989BD0          7          0 00                    64000          0          0          0      65535 2147483647         20

6 00007F1C37989BD0          8          0 00                    64000          0          0          0      65535 2147483647     126805

0 00007F1C37989BD0          9          0 00                    64000          0          0          0      65535 2147483647          0

0 00007F1C37989BD0         10          0 00                    64000          0          0          0      65535 2147483647          0

Either a background process or a reading session can trigger an IMU flush.  Once the IMU flush is done, the status in x$ktifp is updated to 7, and further CR reads will use the undo blocks even though the transaction is not committed.


KTIFPTXFLG ADDR                KTIFPNO   KTIFPSTA XCTADDR              UBSIZE    UBUSAGE     RBSIZE    RBUSAGE   KTIFPPSI   KTIFPRBS   KTIFPTCN

---------- ---------------- ---------- ---------- ---------------- ---------- ---------- ---------- ---------- ---------- ---------- ----------

7 00007F1C37988B98          0          0 00                    64000          0          0          0      65535 2147483647      15303

7 00007F1C37988B98          1          1 000000011BF88848      64000       1384     126464       1300          3         37        859

7 00007F1C37988B98          2          0 00                    64000          0          0          0      65535 2147483647        257

7 00007F1C37988B98          3          0 00                    64000          0          0          0      65535 2147483647         80

7 00007F1C37988B98          4          0 00                    64000          0          0          0      65535 2147483647        285

7 00007F1C37988B98          5          0 00                    64000          0          0          0      65535 2147483647       1140

7 00007F1C37988B98          6          0 00                    64000          0          0          0      65535 2147483647        733

7 00007F1C37988B98          7          0 00                    64000          0          0          0      65535 2147483647         20

6 00007F1C37988B98          8          0 00                    64000          0          0          0      65535 2147483647     126960

0 00007F1C37988B98          9          0 00                    64000          0          0          0      65535 2147483647          0

0 00007F1C37988B98         10          0 00                    64000          0          0          0      65535 2147483647          0

0 00007F1C37988B98         11          0 00                    64000          0          0          0      65535 2147483647          0

0 00007F1C37988B98         12          0 00                    64000          0          0          0      65535 2147483647          0

In the end, one question remained unanswered.  During the IMU CR read, Oracle behaved normally:


SQL> select * from abc where rowid='AAAP00AAFAAAR0VAAA' ;

C1

----------

1

Execution Plan

----------------------------------------------------------

Plan hash value: 1340935522

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |     3 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| ABC  |     1 |     3 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

1  recursive calls

0  db block gets

2  consistent gets

0  physical reads

0  redo size   => No redo

520  bytes sent via SQL*Net to client

519  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1  rows processed

But after the IMU flush, Oracle started generating 108 bytes of redo for the CR read from the second session – not just once, but for every CR read.


SQL>  select * from abc where rowid='AAAP00AAFAAAR0VAAA'

C1

----------

1

Execution Plan

----------------------------------------------------------

Plan hash value: 1340935522

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |     3 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| ABC  |     1 |     3 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

3  consistent gets

0  physical reads

108  redo size  => 108 bytes redo

520  bytes sent via SQL*Net to client

519  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1    rows processed

Maybe I need to ask JL this question!

Thanks to Anita Gupta for helping me to research for this post.

ABMR – Automatic Block Media Recovery

September 7, 2013 1 comment

ABMR is a cool feature; most of us know about it, so instead of describing ABMR itself, I want to share the issues I faced.

The first one was that Oracle did not bother to start the ABMR background process and attempt block recovery.   The alert log complained about the block corruption and did nothing.


Fri Sep 06 19:27:30 2013

Hex dump of (file 5, block 1028) in trace file /n01/oraadmin1/diag/rdbms/oracld/oracld/trace/oracld_ora_30109.trc

Corrupt block relative dba: 0x01400404 (file 5, block 1028)

Completely zero block found during multiblock buffer read

Reading datafile '/n01/oradata1/oracld/data01.dbf' for corruption at rdba: 0x01400404 (file 5, block 1028)

Reread (file 5, block 1028) found same corrupt data (no logical check)

Fri Sep 06 19:27:30 2013

Corrupt Block Found

TSN = 5, TSNAME = DATA

RFN = 5, BLK = 1028, RDBA = 20972548

OBJN = 62328, OBJD = 62328, OBJECT = TEST, SUBOBJECT =

SEGMENT OWNER = THOMAS, SEGMENT TYPE = Table Segment

Errors in file /n01/oraadmin1/diag/rdbms/oracld/oracld/trace/oracld_ora_30109.trc  (incident=41012):

ORA-01578: ORACLE data block corrupted (file # 5, block # 1028)

ORA-01110: data file 5: '/n01/oradata1/oracld/data01.dbf'

Incident details in: /n01/oraadmin1/diag/rdbms/oracld/oracld/incident/incdir_41012/oracld_ora_30109_i41012.trc

It was my bad; the prerequisite for ABMR, as per the Oracle documentation, is:

A physical standby database operating in real-time query mode can be used to repair corrupt data blocks in a primary database. If possible, any corrupt data block encountered when a primary database is accessed will be automatically replaced with an uncorrupted copy of that block from a physical standby database operating in real-time query mode. An ORA-1578 error is returned when automatic repair is not possible.

http://docs.oracle.com/cd/E11882_01/server.112/e17157/unplanned.htm#BABCECIC

So the Data Guard configuration should be in MAXIMUM AVAILABILITY protection mode, with the standby operating in real-time query mode.   That resolved my first issue.
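A minimal sketch of putting the configuration into that state, assuming redo transport to the standby is already set up:

```sql
-- On the primary: raise the protection mode (needs a SYNC redo
-- transport destination pointing at the standby).
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

-- On the standby: open it read-only, then restart managed recovery
-- with real-time apply; this is what "real-time query mode" means.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```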

With the protection mode raised, the ABMR process came up, but plainly said it did not see a standby, even though the standby DB was in sync.


Hex dump of (file 5, block 1028) in trace file /n01/oraadmin1/diag/rdbms/oracld/oracld/trace/oracld_ora_30391.trc

Corrupt block relative dba: 0x01400404 (file 5, block 1028)

Completely zero block found during multiblock buffer read

Reading datafile '/n01/oradata1/oracld/data01.dbf' for corruption at rdba: 0x01400404 (file 5, block 1028)

Reread (file 5, block 1028) found same corrupt data (no logical check)

Starting background process ABMR

Fri Sep 06 19:42:10 2013

ABMR started with pid=27, OS id=30393

Automatic block media recovery service is active.

Automatic block media recovery requested for (file# 5, block# 1028)

Fri Sep 06 19:42:11 2013

Errors in file /n01/oraadmin1/diag/rdbms/oracld/oracld/trace/oracld_bmr0_30395.trc:

ORA-17627: ORA-01017: invalid username/password; logon denied

ORA-17629: Cannot connect to the remote database server

Automatic block media recovery failed for (file# 5, block# 1028)

[No standby available]

Automatic block media recovery requested for (file# 5, block# 1028)

Fri Sep 06 19:42:11 2013

Corrupt Block Found

TSN = 5, TSNAME = DATA

RFN = 5, BLK = 1028, RDBA = 20972548

OBJN = 62328, OBJD = 62328, OBJECT = TEST, SUBOBJECT =

SEGMENT OWNER = THOMAS, SEGMENT TYPE = Table Segment

Fri Sep 06 19:43:11 2013

Automatic block media recovery failed for (file# 5, block# 1028)

[No response received]

Oracle was able to ship the log files, yet it was telling me it could not log in to the standby DB, with the error message “ORA-17627: ORA-01017: invalid username/password; logon denied”.  I was fortunate enough to find a note and discovered I was hitting an unpublished bug: 6804839

http://jhdba.wordpress.com/2008/05/12/more-problems-with-orapwd-file-and-shipping-redo-logs/

As a workaround, I re-created the password file with the option ignorecase=y.
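The orapwd call looked roughly like this (the SYS password is a placeholder, and force=y overwrites the existing file):

```
orapwd file=$ORACLE_HOME/dbs/orapworacld password=<sys_password> ignorecase=y force=y
```

The re-created file also has to be copied to the standby side so both sites share the same password file.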

That resolved my  “ORA-01017: invalid username/password; logon denied” and landed me in another issue.  That one was even more puzzling:


Fri Sep 06 20:02:01 2013

ABMR started with pid=28, OS id=30883

Automatic block media recovery service is active.

Automatic block media recovery requested for (file# 5, block# 1028)

Fri Sep 06 20:03:01 2013

Automatic block media recovery failed for (file# 5, block# 1028)

[Network call timed-out]

Automatic block media recovery requested for (file# 5, block# 1028)

Fri Sep 06 20:03:01 2013

Corrupt Block Found

TSN = 5, TSNAME = DATA

RFN = 5, BLK = 1028, RDBA = 20972548

OBJN = 62328, OBJD = 62328, OBJECT = TEST, SUBOBJECT =

SEGMENT OWNER = THOMAS, SEGMENT TYPE = Table Segment

Fri Sep 06 20:04:01 2013

Automatic block media recovery failed for (file# 5, block# 1028)

[Network call timed-out]


My primary and standby databases both run on the same server, yet it was failing with “Network call timed-out”.  One more entry from the alert log gave me additional information: LOG_ARCHIVE_DEST_2 was not really SYNCHRONIZED with the RFS processes.


LGWR: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (16086)

LGWR: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned

Error 16086 for archive log file 3 to 'oraclds'

LGWR: Failed to archive log 3 thread 1 sequence 78 (16086)

I got the reason from the standby alert log:


Media Recovery Log /n01/oradata1/archs/1_200_819972215.dbf

Media Recovery Waiting for thread 1 sequence 201

Sat Sep 07 08:52:07 2013

Primary database is in MAXIMUM AVAILABILITY mode

Standby controlfile consistent with primary

Standby controlfile consistent with primary

RFS[262]: Assigned to RFS process 4208

RFS[262]: No standby redo logfiles created for thread 1

RFS[262]: No standby redo logfiles selected (reason:7)

Oops, by mistake I had created the standby redo log files for thread 2 instead of thread 1.  I fixed that issue and was fortunate enough to see:


Hex dump of (file 5, block 1028) in trace file /n01/oraadmin1/diag/rdbms/oracld/oracld/trace/oracld_ora_4628.trc

Corrupt block relative dba: 0x01400404 (file 5, block 1028)

Completely zero block found during multiblock buffer read

Reading datafile '/n01/oradata1/oracld/data01.dbf' for corruption at rdba: 0x01400404 (file 5, block 1028)

Reread (file 5, block 1028) found same corrupt data (no logical check)

Starting background process ABMR

Sat Sep 07 09:08:57 2013

ABMR started with pid=28, OS id=4630

Automatic block media recovery service is active.

Automatic block media recovery requested for (file# 5, block# 1028)

Sat Sep 07 09:08:57 2013

Automatic block media recovery successful for (file# 5, block# 1028)

Automatic block media recovery successful for (file# 5, block# 1028)
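For reference, the fix was just to re-create the standby redo logs under the correct thread; the group number, size and path below are illustrative:

```sql
-- Drop the wrongly-created thread 2 group, then add one for thread 1.
-- You need at least one more standby log group than online log groups
-- per thread, each sized the same as the online redo logs.
ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
  GROUP 4 ('/n01/oradata1/oracld/stby_redo04.log') SIZE 50M;
```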

I thought I would document this – hope someone benefits!

Categories: Oracle Tags: ,

Soft parse and session_cached_cursors

August 22, 2013 Leave a comment

session_cached_cursors is one of the neglected parameters in an Oracle environment; it specifies the number of cursors cached in the PGA of a session.  Any SQL executed multiple times will have its cursor cached and can then be executed with a "softer" soft parse.  There are three scenarios for any parse request:

SQL parsed for the first time:  no shared cursor is available in the SGA.  The optimizer performs a hard parse and then executes the SQL.

A shared cursor is available but not cached in the private SQL area:  no hard parse is required.  Under a library cache pin and a 'cursor: mutex' get, the shared plan is copied to the private SQL area and the cursor is executed – call it a soft parse.

The cursor is available both in the SGA and in the private SQL area:  just bind and execute the SQL; no latch is required.

What makes a cursor stay in the private SQL area?  It is limited by the parameter session_cached_cursors, which defaults to 50 from Oracle 11g onwards.  If the number of repeated cursors goes above 50, cached cursors are flushed out of the private SQL area to make room for new incoming cursors.  Even though this is a soft parse, it can cause problems under high concurrency and lead to performance issues – just because session_cached_cursors is set to a low value.
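Whether a session is actually benefiting from the cursor cache can be checked from its statistics; this is a sketch using standard statistic names from V$STATNAME:

```sql
-- Compare parse activity with session cursor cache usage for the
-- current session.  'session cursor cache hits' counts parse calls
-- satisfied from the PGA cache; 'session cursor cache count' shows
-- how many cursors are currently cached.
SELECT sn.name, ms.value
FROM   v$mystat   ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('parse count (total)',
                   'parse count (hard)',
                   'session cursor cache hits',
                   'session cursor cache count');
```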

My test table is just a one-row table built from DBA_OBJECTS.  Since I am running only 5 SQLs repeatedly, I set session_cached_cursors to 3.


SQL> create table b as select * from dba_objects  where rownum < 2;

Table created.

SQL> show parameter session_cached_cursors

NAME                                 TYPE                             VALUE

------------------------------------ -------------------------------- ------------------------------

session_cached_cursors               integer                          3

SQL> show parameter  cursor_sharing

NAME                                 TYPE                             VALUE

------------------------------------ -------------------------------- ------------------------------

cursor_sharing                       string                           FORCE

My test script SELECT this one row table into a local variable.


declare

l_cnt number;

l_owner varchar2(50);

l_object_name varchar2(50);

l_status varchar2(5);

l_DATA_OBJECT_ID number;

begin

for i in 1 ..100000

loop

select object_id into l_cnt from b;

select owner into l_owner from b ;

select object_name into l_object_name from b ;

select status into l_status  from b ;

select DATA_OBJECT_ID into l_DATA_OBJECT_ID from b;

end loop;

end;

/

Exit

Running the above code in a single user session did not cause many issues; still, the total parse time was more than the execute time, with no major waits.


SELECT OBJECT_ID

FROM

B

call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse    99999      1.11       1.03          0          0          0           0

Execute  99999      0.86       0.83          0          0          0           0

Fetch    99999      1.47       1.44          0     299997          0       99999

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total   299997      3.46       3.31          0     299997          0       99999

Rows     Execution Plan

-------  ---------------------------------------------------

0  SELECT STATEMENT   MODE: ALL_ROWS

1   TABLE ACCESS (FULL) OF 'B' (TABLE)

********************************************************************************

SQL ID: 65fn676fmwz4n Plan Hash: 1911541843


SELECT OBJECT_NAME

FROM

B

call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse   100000      1.08       1.06          0          0          0           0

Execute 100000      0.80       0.82          0          0          0           0

Fetch   100000      1.46       1.46          0     300000          0      100000

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total   300000      3.35       3.35          0     300000          0      100000

Because session_cached_cursors was set to 3 and the number of SQL statements run in my session was 5, Oracle was forced to flush a cached cursor each time to make room for the new statement.  For this reason a session cache hit never happened, and every parse ended up as a soft parse.  It took 1.06 seconds to parse and 0.82 seconds to execute.  There were no wait events.

I tried the same test concurrently, with 10 sessions running the same statements.  The result was a very high elapsed time on parsing, because of the soft parsing under 10-user concurrency.


SELECT OBJECT_ID

FROM

B

call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse    99999      1.45       6.01          0          0          0           0

Execute  99999      0.88       1.73          0          0          0           0

Fetch    99999      1.49       3.50          0     299997          0       99999

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total   299997      3.83      11.26          0     299997          0       99999

Elapsed times include waiting on following events:

Event waited on                             Times   Max. Wait  Total Waited

----------------------------------------   Waited  ----------  ------------

cursor: pin S                                  99        0.02          0.72

cursor: mutex S                                38        0.02          0.38

latch: cache buffers chains                    33        0.03          0.24

cursor: mutex X                              3228        0.02          0.96

library cache: mutex X                         13        0.01          0.12

********************************************************************************

The first SQL took 6.01 seconds for parsing and 1.73 seconds for execution, roughly six times more than the same run in isolation.  This is with just 10 users; as the number of users increases, the elapsed time will also increase.  Just for comparison, here is another of the SQLs.


SELECT OWNER

FROM

B

call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse   100000      1.47       5.68          0          0          0           0

Execute 100000      0.86       1.90          0          0          0           0

Fetch   100000      1.63       3.73          0     300000          0      100000

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total   300000      3.97      11.32          0     300000          0      100000

Rows     Execution Plan

-------  ---------------------------------------------------

0  SELECT STATEMENT   MODE: ALL_ROWS

1   TABLE ACCESS (FULL) OF 'B' (TABLE)

Elapsed times include waiting on following events:

Event waited on                             Times   Max. Wait  Total Waited

----------------------------------------   Waited  ----------  ------------

cursor: mutex S                                35        0.02          0.28

cursor: mutex X                              4348        0.02          0.95

latch: cache buffers chains                    59        0.01          0.37

cursor: pin S                                  76        0.01          0.62

library cache: mutex X                         13        0.02          0.16

********************************************************************************

The increase in elapsed time is simply because the sessions are not able to cache cursors in their private memory for re-use; they need to do a soft parse each time.  So whenever concurrency increases, a soft parse is not really soft!  It will kill application scalability.

I repeated the test the same way after setting session_cached_cursors to 10.  The first run in isolation gave these results:

SQL> show parameter session_cached_cursors

NAME                                 TYPE                             VALUE

------------------------------------ -------------------------------- ------------------------------

session_cached_cursors               integer                          10


SELECT OBJECT_ID

FROM

B

call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse        1      0.00       0.00          1          1          0           0

Execute 100000      0.84       0.84          0          0          0           0

Fetch   100000      1.31       1.33          0     300000          0      100000

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total   200001      2.16       2.18          1     300001          0      100000

Elapsed times include waiting on following events:

Event waited on                             Times   Max. Wait  Total Waited

----------------------------------------   Waited  ----------  ------------

db file sequential read                         1        0.00          0.00

********************************************************************************

SELECT OWNER

FROM

B

call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse        1      0.00       0.00          0          1          0           0

Execute 100000      0.83       0.82          0          0          0           0

Fetch   100000      1.35       1.34          0     300000          0      100000

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total   200001      2.19       2.16          0     300001          0      100000

It parsed just once and executed 100,000 times!  The elapsed time for parsing was less than 0.01 seconds!!  As we are running 5 SQLs with session_cached_cursors set to 10, the cache lets us re-use the cached cursors and avoid the expensive 'soft' parse.

Repeated the same test under 10 user concurrency.


SELECT OBJECT_ID

FROM

B

call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse        1      0.00       0.00          0          0          0           0

Execute 100000      0.88       2.05          0          0          0           0

Fetch   100000      1.36       3.88          0     300000          0      100000

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total   200001      2.24       5.94          0     300000          0      100000

Wow, even 10-user concurrency does not affect the overall execution, as the number of parses is just one.  All the concurrency-related latch waits disappeared.


Rows     Execution Plan

-------  ---------------------------------------------------

0  SELECT STATEMENT   MODE: ALL_ROWS

1   TABLE ACCESS (FULL) OF 'B' (TABLE)

Elapsed times include waiting on following events:

Event waited on                             Times   Max. Wait  Total Waited

----------------------------------------   Waited  ----------  ------------

latch: cache buffers chains                    70        0.02          0.45

********************************************************************************

10 x 100,000 concurrent executions resulted in minor cache buffers chains (CBC) latch waits, which are outside the scope of this discussion.   One more sample SQL for comparison.


SELECT OWNER

FROM

B

call     count       cpu    elapsed       disk      query    current        rows

------- ------  -------- ---------- ---------- ---------- ----------  ----------

Parse        1      0.00       0.00          0          0          0           0

Execute 100000      0.80       2.35          0          0          0           0

Fetch   100000      1.37       4.18          0     300000          0      100000

------- ------  -------- ---------- ---------- ---------- ----------  ----------

total   200001      2.17       6.53          0     300000          0      100000

Rows     Execution Plan

-------  ---------------------------------------------------

0  SELECT STATEMENT   MODE: ALL_ROWS

1   TABLE ACCESS (FULL) OF 'B' (TABLE)

Elapsed times include waiting on following events:

Event waited on                             Times   Max. Wait  Total Waited

----------------------------------------   Waited  ----------  ------------

latch: cache buffers chains                    64        0.02          0.44

session_cached_cursors plays an important role in avoiding soft parses, which helps maintain the scalability of the application.  Just setting this parameter to the right value can sometimes resolve your concurrency-related performance issues.
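Changing the parameter is straightforward; note that instance-wide it is a static parameter (the value 100 below is only an example):

```sql
-- Per session: takes effect immediately for that session only.
ALTER SESSION SET session_cached_cursors = 100;

-- Instance-wide: static parameter, so it goes into the spfile and
-- takes effect only after the instance is restarted.
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE = SPFILE;
```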
