enq: SQ – contention

January 9, 2015

Yesterday I saw huge waits on “enq: SQ - contention” – thousands of waits in every snapshot.  But the fix was so simple!  Here is the root cause of the issue:

When you select from a sequence, the NEXTVAL is generated from the seq$ table if it is not cached.  If it is cached, the value is available in a memory structure and there is no need to generate it from seq$, which is a costly operation protected by the SQ enqueue – hence the wait “enq: SQ - contention”.  The test below shows what happens to this wait as the sequence goes from NOCACHE to CACHE 1000.
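Setting the cache is a one-line change.  A sketch of the relevant DDL (the sequence name `s` matches the test below; NOORDER is worth noting because ordered sequences limit the caching benefit, especially in RAC):

<pre>
-- cached values are handed out from memory; the seq$ row is updated
-- only once per CACHE-sized batch of values
create sequence s cache 1000 noorder;

-- or, for an existing sequence:
alter sequence s cache 1000;

-- check the current setting:
select sequence_name, cache_size from user_sequences
where sequence_name = 'S';
</pre>

The trade-off: cached values that were never handed out are lost on instance shutdown or crash, leaving gaps in the number sequence.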

I executed the PL/SQL block below from 20 concurrent sessions.

<pre>
declare
 col1 number;
begin
 for a in 1..20000 loop
  select s.nextval into col1 from dual;
 end loop;
end;
/
</pre>

I used the statements below to run this script from 20 different sessions.

<pre>nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &
nohup sqlplus abc/abc @s &</pre>

With NOCACHE


SQL> create sequence s;

(Note: without a CACHE clause, CREATE SEQUENCE actually defaults to CACHE 20; an explicit NOCACHE would be needed for a true no-cache test.)

select count(*),sum(time_waited)

from v$active_session_history

where event='enq: SQ - contention' and

to_char(SAMPLE_TIME,'DDMONHH24')='09JAN04' and to_char(SAMPLE_TIME,'MI') in (14,15,16,17,18,19,20);


  COUNT(*) SUM(TIME_WAITED)

---------- ----------------

      2336       1677503813

 

With CACHE 50

 


SQL> alter sequence s cache 50;

Sequence altered.


SQL> select count(*),sum(time_waited)

from v$active_session_history

where event='enq: SQ - contention' and

to_char(SAMPLE_TIME,'DDMONHH24')='09JAN04' and to_char(SAMPLE_TIME,'MI') in (21,22,23,24,25,26,27);



  COUNT(*) SUM(TIME_WAITED)

---------- ----------------

       701        459026341

 

With CACHE 500


SQL> alter sequence s cache 500;



Sequence altered.



SQL>

select count(*),sum(time_waited)

from v$active_session_history

where event='enq: SQ - contention' and

to_char(SAMPLE_TIME,'DDMONHH24')='09JAN04' and to_char(SAMPLE_TIME,'MI') in (23,24,25,26,27);



  COUNT(*) SUM(TIME_WAITED)

---------- ----------------

       192         96392664

 

 

With CACHE 1000


SQL>  alter sequence s cache 1000 ;



Sequence altered.



SQL> select count(*),sum(time_waited)

from v$active_session_history

where event='enq: SQ - contention' and

to_char(SAMPLE_TIME,'DDMONHH24')='09JAN04' and to_char(SAMPLE_TIME,'MI') in (29,30,31,32,33,34,35,36,37);



  COUNT(*) SUM(TIME_WAITED)

---------- ----------------

        84         23352139

 

 

With this level of concurrency, the number of waits came down from 2336 (time waited: 1677503813) to 84 (time waited: 23352139) simply by adjusting the sequence CACHE.
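On a live system, the sequences worth fixing can be spotted from the dictionary (a generic query, not part of the test above):

<pre>
-- sequences with little or no caching are "enq: SQ - contention" candidates
select sequence_owner, sequence_name, cache_size, order_flag
from   dba_sequences
where  cache_size < 20
order by cache_size;
</pre>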

 

Version:

 


SQL> select * from v$version;



BANNER

--------------------------------------------------------------------------------

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

PL/SQL Release 11.2.0.1.0 - Production

CORE    11.2.0.1.0      Production

TNS for Solaris: Version 11.2.0.1.0 - Production

NLSRTL Version 11.2.0.1.0 - Production




Autotrace

January 6, 2015

Happy New Year!

Oracle supplies various tools to trace SQL and identify execution plans: the 10046 event, AUTOTRACE, DBMS_XPLAN, etc. are among the most used tracing methods in a DBA's daily life.  Sometimes we need to be very careful while using these tools, especially with bind variables.  Following are some test cases where the tools above reported wrong plans.

The easiest way to get an execution plan is AUTOTRACE, which can report both the estimated plan and the execution statistics.  For example, I have the table ABC below.


SQL> desc abc

 Name                                                                                      Null?    Type

 ----------------------------------------------------------------------------------------- -------- ------------------------------------------------------------

 OWNER                                                                                     NOT NULL VARCHAR2(30)

 OBJECT_NAME                                                                               NOT NULL VARCHAR2(30)

 SUBOBJECT_NAME                                                                                     VARCHAR2(30)

 OBJECT_ID                                                                                          NUMBER

 DATA_OBJECT_ID                                                                                     NUMBER

 OBJECT_TYPE                                                                                        VARCHAR2(19)

 CREATED                                                                                   NOT NULL DATE

 LAST_DDL_TIME                                                                             NOT NULL DATE

 TIMESTAMP                                                                                          VARCHAR2(19)

 STATUS                                                                                             VARCHAR2(7)

 TEMPORARY                                                                                          VARCHAR2(1)

 GENERATED                                                                                          VARCHAR2(1)

 SECONDARY                                                                                          VARCHAR2(1)

 NAMESPACE                                                                                 NOT NULL NUMBER

 EDITION_NAME                                                                                       VARCHAR2(30)



SQL> select OBJECT_ID,count(*) from abc group by OBJECT_ID;

 OBJECT_ID   COUNT(*)

---------- ----------

         1          1

         2     117455

The ABC table has 1 row with OBJECT_ID=1 and 117455 rows with OBJECT_ID=2.  I created an index on OBJECT_ID, so that OBJECT_ID=1 should always use the index while OBJECT_ID=2 should use a full table scan (FTS).
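For reference, the test table and index might have been built along these lines (a sketch only – the original DDL is not shown in the post, so the source table and the stats call are assumptions):

<pre>
-- hypothetical setup: a two-value skewed table
create table abc as select * from dba_objects;
update abc set object_id = 2;
update abc set object_id = 1 where rownum = 1;
commit;
create index i_object_id on abc(object_id);
-- a histogram on OBJECT_ID lets the optimizer see the skew
exec dbms_stats.gather_table_stats(user, 'ABC', method_opt=>'for columns object_id size 2')
</pre>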

My first test was with literals, and AUTOTRACE reported the correct execution plans.


SQL>  select count(OBJECT_TYPE) from abc where object_id=2;

COUNT(OBJECT_TYPE)

------------------

            117455

Elapsed: 00:00:00.01

Execution Plan

---------------------------------------------------------

Plan hash value: 1045519631

---------------------------------------------------------------------------

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |

---------------------------------------------------------------------------

|   0 | SELECT STATEMENT   |      |     1 |    11 |   447   (1)| 00:00:06 |

|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |

|*  2 |   TABLE ACCESS FULL| ABC  |   118K|  1274K|   447   (1)| 00:00:06 |

---------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

   2 - filter("OBJECT_ID"=2)

That is a FTS for OBJECT_ID=2; for OBJECT_ID=1 the same query uses an index range scan.


SQL> select count(OBJECT_TYPE) from abc where object_id=1;

COUNT(OBJECT_TYPE)

------------------

            1



Elapsed: 00:00:00.01

Execution Plan

----------------------------------------------------------

Plan hash value: 2627152604

--------------------------------------------------------------------------------------------

| Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |

--------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT             |             |     1 |    11 |     4   (0)| 00:00:01 |

|   1 |  SORT AGGREGATE              |             |     1 |    11 |            |          |

|   2 |   TABLE ACCESS BY INDEX ROWID| ABC         |     1 |    11 |     4   (0)| 00:00:01 |

|*  3 |    INDEX RANGE SCAN          | I_OBJECT_ID |     1 |       |     3   (0)| 00:00:01 |

--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

   3 - access("OBJECT_ID"=1)

Then I moved to the next level – getting the AUTOTRACE plans using bind variables.


SQL>  set autotrace traceonly

SQL> VARIABLE oid NUMBER;

SQL> EXECUTE :oid  := 1;

SQL> select count(OBJECT_TYPE) from abc where object_id=:oid ;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00

SQL>

Elapsed: 00:00:00.01



Execution Plan

----------------------------------------------------------

Plan hash value: 1045519631



---------------------------------------------------------------------------

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |

---------------------------------------------------------------------------

|   0 | SELECT STATEMENT   |      |     1 |    11 |   448   (1)| 00:00:06 |

|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |

|*  2 |   TABLE ACCESS FULL| ABC  |   118K|  1274K|   448   (1)| 00:00:06 |

---------------------------------------------------------------------------



Predicate Information (identified by operation id):

---------------------------------------------------

   2 - filter("OBJECT_ID"=TO_NUMBER(:OID))



Statistics

----------------------------------------------------------

          1  recursive calls

          0  db block gets

          4  consistent gets

          0  physical reads

          0  redo size

        536  bytes sent via SQL*Net to client

        520  bytes received via SQL*Net from client

          2  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

          1  rows processed

SQL> VARIABLE oid NUMBER;

SQL> EXECUTE :oid  := 2;

SQL> select count(OBJECT_TYPE) from abc where object_id=:oid ;

PL/SQL procedure successfully completed.



Elapsed: 00:00:00.00

SQL>



Elapsed: 00:00:00.04



Execution Plan

----------------------------------------------------------

Plan hash value: 1045519631

---------------------------------------------------------------------------

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |

---------------------------------------------------------------------------

|   0 | SELECT STATEMENT   |      |     1 |    11 |   448   (1)| 00:00:06 |

|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |

|*  2 |   TABLE ACCESS FULL| ABC  |   118K|  1274K|   448   (1)| 00:00:06 |

---------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------



   2 - filter("OBJECT_ID"=TO_NUMBER(:OID))



Statistics

----------------------------------------------------------

          0  recursive calls

          0  db block gets

       1964  consistent gets

          0  physical reads

          0  redo size

        538  bytes sent via SQL*Net to client

        520  bytes received via SQL*Net from client

          2  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

          1  rows processed

AUTOTRACE TRACEONLY reported both plans as FTS, even though OBJECT_ID=1 reads just one row from the table and OBJECT_ID is an indexed column.  I accept that AUTOTRACE TRACEONLY shows an estimated plan, so it may report the wrong plan; experts say AUTOTRACE TRACEONLY is never a dependable way of generating plans.  Interestingly, OBJECT_ID=1 reported just 4 consistent gets while OBJECT_ID=2 showed 1964 gets – so the estimation theory does not really hold here, as I would have expected 1964 consistent reads if the executed plan were actually a FTS.

Then I moved to the next test – AUTOTRACE with EXPLAIN and STATISTICS.  I had some faith in this level, as the SQL statement executes first and the plan is reported afterwards, so the plan should be accurate.




SQL> set autotrace on exp stat

SQL> EXECUTE :oid  := 2;



PL/SQL procedure successfully completed.



Elapsed: 00:00:00.00

SQL> select count(OBJECT_TYPE) from abc where object_id=:oid ;



COUNT(OBJECT_TYPE)

------------------

            117455

Elapsed: 00:00:00.02

Execution Plan

----------------------------------------------------------

Plan hash value: 1045519631

---------------------------------------------------------------------------

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |

---------------------------------------------------------------------------

|   0 | SELECT STATEMENT   |      |     1 |    11 |   448   (1)| 00:00:06 |

|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |

|*  2 |   TABLE ACCESS FULL| ABC  |   118K|  1274K|   448   (1)| 00:00:06 |

---------------------------------------------------------------------------



Predicate Information (identified by operation id):

---------------------------------------------------



   2 - filter("OBJECT_ID"=TO_NUMBER(:OID))



Statistics

----------------------------------------------------------

          1  recursive calls

          0  db block gets

       1590  consistent gets

          0  physical reads

          0  redo size

        538  bytes sent via SQL*Net to client

        520  bytes received via SQL*Net from client

          2  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

          1  rows processed



SQL> EXECUTE :oid  := 1



PL/SQL procedure successfully completed.



Elapsed: 00:00:00.00

SQL> select count(OBJECT_TYPE) from abc where object_id=:oid ;



COUNT(OBJECT_TYPE)

------------------

                 1



Elapsed: 00:00:00.02



Execution Plan

----------------------------------------------------------

Plan hash value: 1045519631

---------------------------------------------------------------------------

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |

---------------------------------------------------------------------------

|   0 | SELECT STATEMENT   |      |     1 |    11 |   448   (1)| 00:00:06 |

|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |

|*  2 |   TABLE ACCESS FULL| ABC  |   118K|  1274K|   448   (1)| 00:00:06 |

---------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------



   2 - filter("OBJECT_ID"=TO_NUMBER(:OID))



Statistics

----------------------------------------------------------

          0  recursive calls

          0  db block gets

       1590  consistent gets

          0  physical reads

          0  redo size

        536  bytes sent via SQL*Net to client

        520  bytes received via SQL*Net from client

          2  SQL*Net roundtrips to/from client

          0  sorts (memory)

          0  sorts (disk)

          1  rows processed

To my surprise, both plans reported FTS regardless of the selectivity, while a 10046 trace showed Oracle actually doing an index range scan for OBJECT_ID=1.  With binds, AUTOTRACE may not report the plan correctly because multiple child cursors can exist due to bind peeking.  So using bind values with AUTOTRACE really does not work when there are multiple child cursors.  I am not sure why Oracle can't fix this, as AUTOTRACE just needs to report the last executed plan, not an estimate.

Then I turned to DBMS_XPLAN, and the results surprised me even more.  OBJECT_ID=1 also reported a FTS:


SQL> select count(OBJECT_TYPE) from abc where object_id=:oid ;



COUNT(OBJECT_TYPE)

------------------

                 1



Elapsed: 00:00:00.01

SQL>  select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST '));



PLAN_TABLE_OUTPUT

-------------------------------------------------------------------------------------------

SQL_ID  99pd5nyyb4gsz, child number 1

-------------------------------------

select count(OBJECT_TYPE) from abc where object_id=:oid



Plan hash value: 1045519631



--------------------------------------------

| Id  | Operation          | Name | E-Rows |

--------------------------------------------

|   0 | SELECT STATEMENT   |      |        |

|   1 |  SORT AGGREGATE    |      |      1 |

|*  2 |   TABLE ACCESS FULL| ABC  |    118K|

--------------------------------------------



Predicate Information (identified by operation id):

---------------------------------------------------



   2 - filter("OBJECT_ID"=:OID)

 

I was not expecting a FTS from DBMS_XPLAN for OBJECT_ID=1 – that was a bit shocking.  Then I changed the format option from ‘ALLSTATS LAST’ to ‘ALLSTATS LAST +PEEKED_BINDS’ and it started reporting the correct plans.


SQL>  select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST +PEEKED_BINDS'));



PLAN_TABLE_OUTPUT

------------------------------------------------------------------------------------------

SQL_ID  am229vwtp1ppk, child number 0

-------------------------------------

 select count(OBJECT_TYPE) from abc where object_id=:oid



Plan hash value: 2627152604



-------------------------------------------------------------

| Id  | Operation                    | Name        | E-Rows |

-------------------------------------------------------------

|   0 | SELECT STATEMENT             |             |        |

|   1 |  SORT AGGREGATE              |             |      1 |

|   2 |   TABLE ACCESS BY INDEX ROWID| ABC         |      1 |

|*  3 |    INDEX RANGE SCAN          | I_OBJECT_ID |      1 |

-------------------------------------------------------------



Peeked Binds (identified by position):

--------------------------------------



   1 - :SYS_B_0 (NUMBER): 1



Predicate Information (identified by operation id):

---------------------------------------------------


   3 - access("OBJECT_ID"=:OID)

So, be careful when you use AUTOTRACE – or any of these tools – with bind variables.  Oracle should put more effort into making them dependable.  To get the real picture, always use the 10046 event and tkprof.
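For reference, the usual way to capture that real picture with a 10046 trace at the session level (standard event syntax; the raw trace file is then formatted with tkprof):

<pre>
-- level 12 traces binds and waits
alter session set events '10046 trace name context forever, level 12';
-- ... run the statement under test ...
alter session set events '10046 trace name context off';

-- trace file location:
select value from v$diag_info where name = 'Default Trace File';
-- then, on the OS: tkprof <tracefile> out.txt sys=no
</pre>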


SQL> select * from v$version;



BANNER

--------------------------------------------------------------------------------

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

PL/SQL Release 11.2.0.4.0 - Production

CORE    11.2.0.4.0      Production

TNS for Linux: Version 11.2.0.4.0 - Production

NLSRTL Version 11.2.0.4.0 - Production



APPEND and PARALLEL

November 21, 2014

The APPEND hint in Oracle is very useful; it brings many benefits:

  • Less fragmented data loading – data is placed above the HWM so that similar data stays together; otherwise Oracle places records into blocks using the PCTFREE rule
  • No undo is generated for the table data – in a large direct path insert this frees up a large number of undo blocks
  • Fewer cache buffers chains (CBC) latches (because there are no undo blocks), so less CPU usage
  • Readers need not roll back the records, so consistent reads are less costly – just read below the HWM
  • Rollback is faster – Oracle simply discards all the blocks above the HWM, which is nearly instant
  • Less redo, as undo blocks are not generated.  In NOARCHIVELOG mode, very limited redo, only to manage the internal operations; both instance recovery and media recovery of the loaded data are then out of scope of redo
  • No undo – no infamous ORA-01555 “snapshot too old” error!

 

To achieve a direct path load, we typically use the /*+ APPEND */ hint, and that will not generate undo for the inserted data.

SQL> insert /*+ APPEND */ into t1 select * from t;
1803136 rows created.
Elapsed: 00:00:03.09

SQL> set linesize 180
SQL> col status for a8
SQL> col undousage for 999999999999.99
SQL>  SELECT b.INST_ID,a.sid, a.username, a.status,b.xidusn, b.used_urec, b.used_ublk,flag,sql_id,b.used_ublk*8/1024/1024 undousage
 FROM gv$session a, gv$transaction b
 WHERE a.saddr(+) = b.ses_addr and a.INST_ID(+) = b.INST_ID ;
   INST_ID        SID USERNAME                       STATUS       XIDUSN  USED_UREC  USED_UBLK       FLAG SQL_ID               UNDOUSAGE
---------- ---------- ------------------------------ -------- ---------- ---------- ---------- ---------- ------------- ----------------
         2       2299 TEST                           INACTIVE         15          1          1   16782851 djzyaz5uj6vnr              .00

We got just one undo record, which is probably used for internal operations such as extent management.  A conventional load of the same data generated 43599 undo records in 950 blocks, which is 7600KB (950 × 8KB).

<pre>SQL> insert into t1 select * from t;
1803136 rows created.
Elapsed: 00:00:03.87

SQL> set linesize 180
SQL> col status for a8
SQL> col undousage for 999999999999.99
SQL>  SELECT b.INST_ID,a.sid, a.username, a.status,b.xidusn, b.used_urec, b.used_ublk,flag,sql_id,b.used_ublk*8/1024/1024 undousage
  2    FROM gv$session a, gv$transaction b
  3    WHERE a.saddr(+) = b.ses_addr and a.INST_ID(+) = b.INST_ID ;
   INST_ID        SID USERNAME                       STATUS       XIDUSN  USED_UREC  USED_UBLK       FLAG SQL_ID               UNDOUSAGE
---------- ---------- ------------------------------ -------- ---------- ---------- ---------- ---------- ------------- ----------------
         2       2299 TEST                           INACTIVE         12      43599        950       7683 9c7qck66f77c0              .01</pre>

 

Using a direct path load is always useful.  Oracle documented a wonderful thing: when you use a PARALLEL insert, it is direct path by default, and to get a conventional load you should explicitly use the NOAPPEND hint.

 

https://docs.oracle.com/cd/E11882_01/server.112/e41573/hintsref.htm#PFGRF50101

 

Well, that may be true – and it is false as well.  I am not sure what is meant by a “direct path load” here.  Do we get all the above benefits of APPEND while using only the PARALLEL hint?  I thought so, until I tested it.  It may be true that Oracle places all the blocks above the HWM, but is it reducing the undo?  One of the beauties of APPEND is reducing undo, and if that is not achieved, a direct path load is much less beneficial.

I ran the above test again in parallel –

<pre>SQL>  insert /*+ parallel (t1,5) */ into t1 select * from t;
1803136 rows created.
Elapsed: 00:00:02.75

SQL> set linesize 180
SQL> col status for a8
SQL> col undousage for 999999999999.99
SQL>  SELECT b.INST_ID,a.sid, a.username, a.status,b.xidusn, b.used_urec, b.used_ublk,flag,sql_id,b.used_ublk*8/1024/1024 undousage
  2    FROM gv$session a, gv$transaction b
  3    WHERE a.saddr(+) = b.ses_addr and a.INST_ID(+) = b.INST_ID ;
   INST_ID        SID USERNAME                       STATUS       XIDUSN  USED_UREC  USED_UBLK       FLAG SQL_ID               UNDOUSAGE
---------- ---------- ------------------------------ -------- ---------- ---------- ---------- ---------- ------------- ----------------
         2       2299 TEST                           INACTIVE         12      43743        952       7683 29hu6nmj0g66w              .01</pre>

 

To my surprise, Oracle generated even more undo records and blocks than the serial conventional load.  So Oracle says PARALLEL implies APPEND, but not all APPEND features showed up in my PARALLEL load.  Many times in the past I decided to use PARALLEL thinking it was a real APPEND.  I re-tried the insert with both PARALLEL and APPEND – and it worked as expected, though it took more time to complete.
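One session setting worth checking here (an assumption on my part, since the session settings are not shown above): parallel DML is disabled by default, and without enabling it only the query side of the insert runs in parallel – the insert itself stays serial and conventional, which would produce exactly the undo seen above.

<pre>
-- parallel DML must be enabled per session
alter session enable parallel dml;
insert /*+ parallel (t1,5) */ into t1 select * from t;

-- V$SESSION.PDML_STATUS shows whether it is enabled for the session
select pdml_status from v$session
where sid = sys_context('userenv','sid');
</pre>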

<pre>SQL> insert /*+ parallel (t1,5) APPEND */ into t1 select * from t;
1803136 rows created.
Elapsed: 00:00:05.94

SQL> set linesize 180
SQL> col status for a8
SQL> col undousage for 999999999999.99
SQL>  SELECT b.INST_ID,a.sid, a.username, a.status,b.xidusn, b.used_urec, b.used_ublk,flag,sql_id,b.used_ublk*8/1024/1024 undousage
  2    FROM gv$session a, gv$transaction b
  3    WHERE a.saddr(+) = b.ses_addr and a.INST_ID(+) = b.INST_ID ;
   INST_ID        SID USERNAME                       STATUS       XIDUSN  USED_UREC  USED_UBLK       FLAG SQL_ID               UNDOUSAGE
---------- ---------- ------------------------------ -------- ---------- ---------- ---------- ---------- ------------- ----------------
         2       2299 TEST                           INACTIVE         15          1          1   16782851 avnd4v97v389t              .00</pre>

Well, it generated just 1 undo block, confirming my assumption.
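A quick cross-check that a load really went direct path (standard Oracle behavior, not from the test above): after a direct-path insert, the same transaction cannot query the table until it commits.

<pre>
insert /*+ APPEND */ into t1 select * from t;
select count(*) from t1;
-- ORA-12838: cannot read/modify an object after modifying it in parallel
-- getting this error confirms the direct path load; a conventional
-- insert would have allowed the query
commit;
</pre>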

Consistent gets – How Many?

February 10, 2014

When you execute a SQL statement, why is there a difference in consistent gets on the same set of data for the same SQL?  For any SELECT, Oracle needs to prepare read-consistent data in the buffer cache, using undo records where necessary, and then return the data to the requesting session as of a specific SCN.  Touching any block in the buffer cache to prepare this consistent data is counted as a consistent get (consistent read).

In the ideal situation, reading a row from a block needs just one consistent get.


SQL> select * from test where rowid='AAAPaVAABAAAzfJAAA';

C1 C2

---------- ----------

1 CON$

Execution Plan

----------------------------------------------------------

Plan hash value: 2153624467


-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |    32 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| TEST |     1 |    32 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------


Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

1  consistent gets

0  physical reads

0  redo size

589  bytes sent via SQL*Net to client

520  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1  rows processed

But when I ran the same SQL again, after a transaction from a different session, the number of consistent gets went up to 3 – just to read a single row.


SQL> select * from test where rowid='AAAPaVAABAAAzfJAAA';

C1 C2

---------- ----------

1 ABC


Execution Plan

----------------------------------------------------------

Plan hash value: 2153624467


-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |    32 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| TEST |     1 |    32 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

3  consistent gets

0  physical reads

108  redo size

588  bytes sent via SQL*Net to client

520  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1  rows processed

The 3 consistent gets might be (but this later turned out to be incorrect!):

  • Access the current block
  • Create a new CONSISTENT block and copy the CURRENT block contents
  • Access the undo block and rollback the uncommitted transaction

So, what if there is one more transaction going on against a different row in the same block?  We would need to access one more undo block to roll back the second uncommitted transaction, so we would expect 4 consistent gets.


SQL> select * from test where rowid='AAAPaVAABAAAzfJAAA';

C1 C2

---------- ----------

1 ABC

Execution Plan

----------------------------------------------------------

Plan hash value: 2153624467

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |    32 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| TEST |     1 |    32 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

5  consistent gets

0  physical reads

108  redo size

588  bytes sent via SQL*Net to client

520  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1   rows processed

Contrary to my expectation of one more consistent get, the number of consistent gets grew by 2, reaching 5.  I was sure I needed to read one extra undo block for the second uncommitted transaction, but what is the 5th block?

A 10046 trace showed me that I was totally wrong – including my earlier assumption.  There was only one block read from the table, and the remaining 4 reads were from the UNDO file.


WAIT #2: nam='db file sequential read' ela= 34 file#=1 block#=210889 blocks=1 obj#=63125 tim=39307229867247

WAIT #2: nam='db file sequential read' ela= 25 file#=3 block#=208 blocks=1 obj#=0 tim=39307229867390

WAIT #2: nam='db file sequential read' ela= 22 file#=3 block#=224 blocks=1 obj#=0 tim=39307229867454

WAIT #2: nam='db file sequential read' ela= 22 file#=3 block#=439 blocks=1 obj#=0 tim=39307229867585

WAIT #2: nam='db file sequential read' ela= 21 file#=3 block#=157 blocks=1 obj#=0 tim=39307229867696

First undo block


Block dump from disk:

buffer tsn: 2 rdba: 0x00c000d0 (3/208)

scn: 0x01a7.37e301cf seq: 0x01 flg: 0x04 tail: 0x01cf2601

frmt: 0x02 chkval: 0xd8b1 type: 0x26=KTU SMU HEADER BLOCK

Second undo block


buffer tsn: 2 rdba: 0x00c000e0 (3/224)

scn: 0x01a7.37e300bb seq: 0x01 flg: 0x04 tail: 0x00bb2601

frmt: 0x02 chkval: 0xa4d2 type: 0x26=KTU SMU HEADER BLOCK

3rd undo block


buffer tsn: 2 rdba: 0x00c001b7 (3/439)

scn: 0x01a7.37e301cf seq: 0x01 flg: 0x04 tail: 0x01cf0201

frmt: 0x02 chkval: 0xc91c type: 0x02=KTU UNDO BLOCK

4th undo block


buffer tsn: 2 rdba: 0x00c0009d (3/157)

scn: 0x01a7.37e300bb seq: 0x01 flg: 0x04 tail: 0x00bb0201

frmt: 0x02 chkval: 0x5f27 type: 0x02=KTU UNDO BLOCK

If you look closely, there is a pair of blocks per SCN: the first block, of type “KTU SMU HEADER BLOCK”, is the undo segment header, and the second, of type “KTU UNDO BLOCK”, is the undo block itself.  So Oracle is doing 1 consistent get to copy the current block, plus, for each open transaction, 1 undo header block and 1 undo block.  X$BH was also showing one more block in STATE 3, which I believe is not reported in the consistent gets – that is the block read from the data file with the uncommitted transactions.  I repeated the test with one more transaction, and got 7 consistent gets.
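The buffer copies mentioned above can be watched directly in X$BH (as SYS; STATE 3 is a consistent-read clone – a sketch, with the table name matching the test):

<pre>
-- buffer copies of the test table's blocks, by state (run as SYS)
select file#, dbablk, state, count(*)
from   x$bh
where  obj = (select data_object_id from dba_objects
              where object_name = 'TEST' and owner = user)
group by file#, dbablk, state;
</pre>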


SQL>  select * from test where rowid='AAAPaVAABAAAzfJAAA';

C1 C2

---------- ----------

1 ABC

Execution Plan

----------------------------------------------------------

Plan hash value: 2153624467

-----------------------------------------------------------------------------------

| Id  | Operation                  | Name | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------

|   0 | SELECT STATEMENT           |      |     1 |    32 |     1   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY USER ROWID| TEST |     1 |    32 |     1   (0)| 00:00:01 |

-----------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

0  recursive calls

0  db block gets

7  consistent gets

0  physical reads

108  redo size

588  bytes sent via SQL*Net to client

520  bytes received via SQL*Net from client

2  SQL*Net roundtrips to/from client

0  sorts (memory)

0  sorts (disk)

1   rows processed

In a nutshell, how many consistent gets, at most, can it take just to read one row from a block that holds 50 rows?  In theory, 50 different transactions can be active against that block concurrently.  So the maximum number of consistent gets is 2n+1, where n is the number of rows in the block.  That means up to 101 consistent gets just to read one row!
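The bookkeeping behind that formula can be sketched in a couple of lines (the function name is mine, not an Oracle term):

```python
def max_consistent_gets(rows_in_block: int) -> int:
    # 1 get to clone the current block, plus 1 undo-segment-header get
    # and 1 undo-block get per concurrent transaction; there can be at
    # most one uncommitted transaction per row in the block.
    return 2 * rows_in_block + 1

print(max_consistent_gets(3))   # 7  - matches the test above (3 transactions)
print(max_consistent_gets(50))  # 101
```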

Elapsed Checkpoint Time

December 31, 2013 Leave a comment

Adieu to 2013 with this small post!  Happy new year to everyone.

Oracle controls the incremental checkpoint frequency or interval based on the parameter FAST_START_MTTR_TARGET  or  LOG_CHECKPOINT_INTERVAL.  But, can you see the real checkpoint interval values?  What is the elapsed time between checkpoints?

Yes, you can.  The fixed table X$KCCDI holds both the last checkpoint SCN and the current SCN: column DICKP_SCN is the last checkpoint SCN and DICUR_SCN is the current SCN.

We get the CURRENT_SCN from v$database as,


SQL> select current_scn from v$database;

CURRENT_SCN

-----------

102707163

And looking at the X$KCCDI fixed table, you can get the same SCN values.


SQL> select DICKP_SCN,DICUR_SCN from X$KCCDI;

DICKP_SCN        DICUR_SCN

---------------- ----------------

102711050        102711285

SQL> select current_scn from v$database;

CURRENT_SCN

-----------

102711289

Using the SCN_TO_TIMESTAMP function, you can convert the SCN values to time stamps.


SQL> select scn_to_timestamp(DICKP_SCN), scn_to_timestamp(DICUR_SCN) from X$KCCDI;

SCN_TO_TIMESTAMP(DICKP_SCN)                                                 SCN_TO_TIMESTAMP(DICUR_SCN)

--------------------------------------------------------------------------- ---------------------------------------------------------------------------

31-DEC-13 01.18.54.000000000 AM                                             31-DEC-13 01.31.31.000000000 AM

Now, it is easy to find the elapsed time since the last checkpoint.


SQL> select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:01:51.000000000
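The same subtraction can be sanity-checked outside the database using the two timestamps SCN_TO_TIMESTAMP returned earlier (the gap differs from the interval above only because the snapshots were taken at different moments):

```python
from datetime import datetime

# Format matching the displayed timestamps, with fractional seconds dropped.
fmt = "%d-%b-%y %I.%M.%S %p"
ckpt = datetime.strptime("31-DEC-13 01.18.54 AM", fmt)  # scn_to_timestamp(DICKP_SCN)
cur  = datetime.strptime("31-DEC-13 01.31.31 AM", fmt)  # scn_to_timestamp(DICUR_SCN)
print(cur - ckpt)  # 0:12:37
```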

The checkpoint elapsed time depends on the load on the database.  When the database was idle, it took 3 minutes between checkpoints, but that is not a hard limit.


SQL> select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:03:00.000000000

SQL>  select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:00.000000000

But the elapsed time went down to a few seconds when I increased the load.


SQL> select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:20.000000000

SQL>  select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:00.000000000


SQL>  select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:03.000000000

SQL> select scn_to_timestamp(DICUR_SCN) - scn_to_timestamp(DICKP_SCN)  from X$KCCDI;

SCN_TO_TIMESTAMP(DICUR_SCN)-SCN_TO_TIMESTAMP(DICKP_SCN)

---------------------------------------------------------------------------

+000000000 00:00:00.000000000

And a big goodbye to 2013!

log_archive_dest_?? is ISPDB_MODIFIABLE?

December 20, 2013 1 comment

In Oracle 12c, some parameters can be changed in a pluggable database if the impact is limited to the current container.   However, global parameters like SGA_TARGET can be set only through the root container, CDB$ROOT.    A new column, ISPDB_MODIFIABLE, has been added to the V$PARAMETER view to list the parameters modifiable from a PDB.


SQL> select count(*) from  v$parameter where ISPDB_MODIFIABLE='TRUE';

COUNT(*)

----------

171

There are 171 parameters listed as modifiable from a PDB. Looking at the parameters closely, I was interested in the following ones.


SQL> select name from v$parameter where name like '%log_archive%'  and ISPDB_MODIFIABLE='TRUE';

NAME

--------------------------------------------------------------------------------

log_archive_dest_1

log_archive_dest_2

log_archive_dest_3

log_archive_dest_4

log_archive_dest_5

log_archive_dest_6

log_archive_dest_7

log_archive_dest_8

log_archive_dest_9

log_archive_dest_10

log_archive_dest_11

log_archive_dest_12

log_archive_dest_13

log_archive_dest_14

log_archive_dest_15

log_archive_dest_16

log_archive_dest_17

log_archive_dest_18

log_archive_dest_19

log_archive_dest_20

log_archive_dest_21

log_archive_dest_22

log_archive_dest_23

log_archive_dest_24

log_archive_dest_25

log_archive_dest_26

log_archive_dest_27

log_archive_dest_28

log_archive_dest_29

log_archive_dest_30

log_archive_dest_31

log_archive_dest_state_1

log_archive_dest_state_2

log_archive_dest_state_3

log_archive_dest_state_4

log_archive_dest_state_5

log_archive_dest_state_6

log_archive_dest_state_7

log_archive_dest_state_8

log_archive_dest_state_9

log_archive_dest_state_10

log_archive_dest_state_11

log_archive_dest_state_12

log_archive_dest_state_13

log_archive_dest_state_14

log_archive_dest_state_15

log_archive_dest_state_16

log_archive_dest_state_17

log_archive_dest_state_18

log_archive_dest_state_19

log_archive_dest_state_20

log_archive_dest_state_21

log_archive_dest_state_22

log_archive_dest_state_23

log_archive_dest_state_24

log_archive_dest_state_25

log_archive_dest_state_26

log_archive_dest_state_27

log_archive_dest_state_28

log_archive_dest_state_29

log_archive_dest_state_30

log_archive_dest_state_31

log_archive_min_succeed_dest

63 rows selected.

The above 63 parameters are said to be modifiable from a PDB, but that contradicts the definition: only parameters whose impact is limited to the PDB should be modifiable there.   These are global parameters that take effect at the instance level, since Oracle shares a single set of redo log files across all PDBs.    So it makes no sense that we could modify them from a PDB.  I tried to change log_archive_dest_state_1 to defer from a PDB, and it failed.


SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL> alter system set log_archive_dest_state_1=defer;

alter system set log_archive_dest_state_1=defer

*

ERROR at line 1:

ORA-02097: parameter cannot be modified because specified value is invalid

ORA-65040: operation not allowed from within a pluggable database

Then I tried to change the value of the parameter log_archive_dest_1 from the PDB, which failed again.  But I was able to change it from the root container.


SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL> alter system set log_archive_dest_1= 'location=/n01/oradata1/12cDB/arch1' container=current ;

alter system set log_archive_dest_1= 'location=/n01/oradata1/12cDB/arch1' container=current

*

ERROR at line 1:

ORA-02097: parameter cannot be modified because specified value is invalid

ORA-65040: operation not allowed from within a pluggable database

SQL> conn / as sysdba

Connected.

SQL> alter system set log_archive_dest_1= 'location=/n01/oradata1/12cDB/arch1' scope=both;

System altered.

So, my assumption is correct: even though these parameters are listed as modifiable, you can’t really modify them from a PDB.  A quick search in Oracle Support turned up bug note 17266585: the above parameters are wrongly listed as PDB-modifiable when they are not.

Categories: Oracle Tags: , ,

Memory flushing and PDBs

December 19, 2013 Leave a comment

In a multitenant database environment, PDBs are plugged into a CDB and share a single SGA.  All PDB objects are loaded into the same SGA components, and there are no ISPDB_MODIFIABLE SGA parameters.  Looking at V$LATCH_CHILDREN, all the CBC (cache buffers chains) latches are reported under CON_ID 0, i.e. they belong to the CDB as a whole rather than to any individual container.


SQL> show CON_ID

CON_ID

------------------------------

1

SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

SQL> select con_id ,count(*) from v$latch_children where name='cache buffers chains' group by con_id;

CON_ID   COUNT(*)

---------- ----------

0    16384




And queried from a PDB, the CBC latches still report CON_ID 0; they belong to the CDB as a whole.



SQL> show con_id

CON_ID

------------------------------

3

SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL>  select con_id ,count(*) from v$latch_children where name='cache buffers chains' group by con_id;

CON_ID   COUNT(*)

---------- ----------

0      16384

Also, looking at the buffer cache ‘working sets’, the PDBs and the CDB share the same set of working sets.  From the CDB:


SQL> show con_id

CON_ID

------------------------------

1

SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

SQL> select SET_ID,DBWR_NUM,CNUM_SET,POOL_ID from x$kcbwds;

SET_ID   DBWR_NUM   CNUM_SET    POOL_ID

---------- ---------- ---------- ----------

1          0          0          1

2          0          0          1

3          0          0          2

4          0          0          2

5          0      76713          3

6          0      76713          3

7          0          0          4

8          0          0          4

9          0          0          5

10          0          0          5

11          0          0          6

12          0          0          6

13          0          0          7

14          0          0          7

15          0          0          8

16          0          0          8

16 rows selected.

And the PDB also points to the same working sets.


SQL> show con_id

CON_ID

------------------------------

3

SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL> select SET_ID,DBWR_NUM,CNUM_SET,POOL_ID from x$kcbwds;

SET_ID   DBWR_NUM   CNUM_SET    POOL_ID

---------- ---------- ---------- ----------

1          0          0          1

2          0          0          1

3          0          0          2

4          0          0          2

5          0      76713          3

6          0      76713          3

7          0          0          4

8          0          0          4

9          0          0          5

10          0          0          5

11          0          0          6

12          0          0          6

13          0          0          7

14          0          0          7

15          0          0          8

16          0          0          8

16 rows selected.

 

So, what happens if we flush the buffer cache from one of the containers?  Since all the containers share the same SGA, I expected Oracle to flush all the blocks from the buffer cache regardless of which PDB or CDB they came from.  But I was totally wrong: Oracle was able to identify the ownership and flushed out only the blocks belonging to that specific container.

Both the PDB and the CDB show no cached objects belonging to the schema THOMAS.


SQL> show con_name


CON_NAME

------------------------------

SALESDB

SQL> set linesize 200

col object_name for a30

col owner for a30

select CON_ID,object_name,owner,STATE

from dba_objects o, x$bh b

where o.object_id = b.obj and owner  in ('THOMAS') and state <> 0;


no rows selected

And from the CDB


SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

SQL> set linesize 200

col object_name for a30

col owner for a30

select CON_ID,object_name,owner,STATE

from dba_objects o, x$bh b

where o.object_id = b.obj and owner  in ('THOMAS') and state <> 0;


no rows selected

After selecting from the object, I can see the blocks are cached in the PDB.


SQL> select object_name from thomas.test where object_id=456;

OBJECT_NAME

------------------------------

STMT_AUDIT_OPTION_MAP


SQL> set linesize 200

col object_name for a30

col owner for a30

select CON_ID,object_name,owner,STATE

from dba_objects o, x$bh b

where o.object_id = b.obj and owner  in ('THOMAS') and state <> 0;


CON_ID OBJECT_NAME                    OWNER                               STATE

---------- ------------------------------ ------------------------------ ----------

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

 

Now I flushed the buffer cache from CDB$ROOT, expecting it to flush all the blocks from the buffer cache, including those of the PDBs.


SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

SQL> !date

Thu Dec 19 11:53:50 UTC 2013

SQL> alter system flush buffer_cache;

System altered.

But contrary to my expectation, the PDB's blocks were still cached in the buffer cache.


SQL> show con_name

CON_NAME

------------------------------

SALESDB

SQL> !date

Thu Dec 19 11:54:08 UTC 2013

SQL> set linesize 200

col object_name for a30

col owner for a30

select CON_ID,object_name,owner,STATE

from dba_objects o, x$bh b

where o.object_id = b.obj and owner  in ('THOMAS') and state <> 0;

CON_ID OBJECT_NAME                    OWNER                               STATE

---------- ------------------------------ ------------------------------ ----------

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

3 I_TEST                         THOMAS                                  1

Even though Oracle uses a single buffer cache for the PDBs and the CDB, it is still able to track ownership and flush only the blocks belonging to the current container.

Categories: Oracle Tags: , , ,