People – not just people; rather, they are your resources!
Other people are your greatest resources! They give birth to you; they feed you, dress you, provide you with money, make you laugh and cry. They comfort you, heal you, invest your money, service your car and bury you. We can’t live without them. We can’t even die without them.
CONTAINERS clause and LONG columns
We were working on a monitoring script using HIGH_VALUE in CDB_TAB_PARTITIONS and found that the HIGH_VALUE column is missing from CDB_TAB_PARTITIONS, while the column exists in DBA_TAB_PARTITIONS. I couldn’t find any reference in the Oracle documentation (maybe I need a better search!) explaining why the column is missing from the CDB view. Looking at the CDB_TAB_PARTITIONS view definition, Oracle uses the CONTAINERS clause to fetch all partition information, except the column HIGH_VALUE.
SQL>; select TEXT from dba_views where lower(VIEW_NAME) ='cdb_tab_partitions'; TEXT -------------------------------------------------------------------------------- SELECT "TABLE_OWNER","TABLE_NAME","COMPOSITE","PARTITION_NAME","SUBPARTITION_COU NT","HIGH_VALUE_LENGTH","PARTITION_POSITION","TABLESPACE_NAME","PCT_FREE","PCT_U SED","INI_TRANS","MAX_TRANS","INITIAL_EXTENT","NEXT_EXTENT","MIN_EXTENT","MAX_EX TENT","MAX_SIZE","PCT_INCREASE","FREELISTS","FREELIST_GROUPS","LOGGING","COMPRES SION","COMPRESS_FOR","NUM_ROWS","BLOCKS","EMPTY_BLOCKS","AVG_SPACE","CHAIN_CNT", "AVG_ROW_LEN","SAMPLE_SIZE","LAST_ANALYZED","BUFFER_POOL","FLASH_CACHE","CELL_FL ASH_CACHE","GLOBAL_STATS","USER_STATS","IS_NESTED","PARENT_TABLE_PARTITION","INT ERVAL","SEGMENT_CREATED","INDEXING","INMEMORY","INMEMORY_PRIORITY","INMEMORY_DIS TRIBUTE","INMEMORY_COMPRESSION","INMEMORY_DUPLICATE","CON_ID" FROM CONTAINERS("S YS"."DBA_TAB_PARTITIONS")
I tried to read the columns individually, but only HIGH_VALUE is not visible from the root.
SQL> conn / as sysdba Connected. SQL> select table_name from containers(dba_tab_partitions) where rownum select high_value from containers(dba_tab_partitions) where rownum alter session set container = PDBL0020_1 ; Session altered. SQL> create table test (C1 varchar2(25), C2 number, c3 long ); Table created. SQL> insert into test values ('Thomas',3,'This is a test1'); 1 row created.
It is mandatory to create the table in CDB$ROOT as well in order to access it from the root container.
SQL> conn / as sysdba Connected. SQL> create table test (C1 varchar2(25), C2 number, c3 long ); Table created.
Now read the TEST table from the PDB
SQL> alter session set container = PDBL0020_1 ; Session altered. SQL> select * from containers(test) where con_id =3; C1 C2 C3 CON_ID ------------------------- ---------- --------------- ---------- Thomas 3 This is a test1 3
You can see all 3 columns, including C3, the LONG column. Now connect to the root and issue the same query.
SQL> conn / as sysdba Connected. SQL> select * from containers(test) where con_id =3; C1 C2 CON_ID ------------------------- ---------- ---------- Thomas 3 3
Yes, column C3, the LONG data type, is missing! That means you can’t really access LONG data types using the CONTAINERS clause, which is not a good thing. I haven’t seen any workaround or fix for this yet, please comment if you find any!
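One possible workaround (an untested sketch on my part, not something I have verified): materialize HIGH_VALUE into a CLOB inside each PDB using TO_LOB, then query that helper table through CONTAINERS() from the root. The table name PART_HIGH_VALUES is purely hypothetical.

create table part_high_values as
  select table_owner,
         table_name,
         partition_name,
         to_lob(high_value) as high_value   -- TO_LOB converts the LONG into a CLOB during CTAS
  from   dba_tab_partitions;

As with the TEST table above, the same (empty) table would also have to exist in CDB$ROOT before select * from containers(part_high_values) can be run from the root.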
Oracle version
SQL> select * from v$version; BANNER CON_ID -------------------------------------------------------------------------------- ---------- Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production 0 PL/SQL Release 12.1.0.2.0 - Production 0 CORE 12.1.0.2.0 Production 0 TNS for Linux: Version 12.1.0.2.0 - Production 0 NLSRTL Version 12.1.0.2.0 - Production 0
Changes – Personal and professional
Every organization needs to change to survive and stay competitive. Change is not an easy process to complete, and it can affect many lives both positively and negatively. The change owners may take wrong decisions, intentionally or unintentionally, which can impact the stakeholders. However, each individual should follow the same rules to come out of the change, whether that individual is affected positively or negatively. Only the emotions change, and that is a temporary phenomenon.
Each individual goes through changes every day of his or her life, both personal and professional. Without change, life is meaningless. One should compete with oneself and do better the next time a change comes. Success is a battle between YOU and YOURSELF only. So, how do you pass through a change? There are multiple phases –
Phase 1: The Announcement: Change comes in many ways – I will classify them into two broad categories, though not completely complementary: PLANNED and UNPLANNED. Organizational changes are mostly planned ones; however, there are situations where we hit unplanned ones as well. An incident like the death of a near and dear one is the best example of an unplanned change. Depending on the type of change, it will trigger a variety of emotions – happiness, sadness, disgust, anger, fear, surprise etc. None of these feelings are bad or evil. Categorizing these emotions into positive or negative is itself not correct. They are just reactions to the change that is happening. Apply your human wisdom to control your emotions and accept the change.
Phase 2: The Ending Phase: This is the second phase of the change, where the impact of the announcement is over and you recover from the emotions. Now you know that the change cannot be undone and you must face it. In this phase, the most important action is to accept the change that is under way and level your emotions to the reality. This applies to both the happily and the unhappily impacted. This is the time to stop mourning your losses and stop rejoicing over your gains. Accept the change and move to the next phase.
It is important that you experience the Ending phase and then move to the next one. Some of us do not come out of the emotions and stay tied to them for a long time. A prolonged Phase 1 may harm individuals and lead them to both professional and personal challenges. If you feel that the ending phase is not happening and the emotions are still ruling you, it is time to consult an experienced therapist and seek help before it destroys you. A much prolonged Phase 1 may lead to depression, revenge, crime and even suicidal tendencies. The faster you finish the ending phase, the better it will be. Remember one thing – that one announcement or incident is not the end of the world!
Phase 3: The Transition Plan: Every change challenges you with a new set of hurdles. This is a new beginning after a painful end. You must ask yourself “Where am I?”, “Where do I want to be?” and “How do I get there?” to prepare the transition plans. You may use a different template depending on the type of challenges you have to manage. They can be personal, professional or social. Applying your earlier experience and the outcome of the ending-phase thoughts will help you build an unbiased transition plan. I use the term “unbiased” to stress the importance of a transition plan that achieves your goals and does not aim to hurt those who caused the change. They are the past and irrelevant to your future goals and objectives.
Phase 4: BAU (Business As Usual): You are back on track and waiting for the next change, when you will go through the same process again. Not every change brings success, but every change brings a lot of learning, and that leads you to ultimate success. All the best!
LNS and LOG_BUFFER
In an Oracle Data Guard environment the Log Network Server (LNS) process transports the redo from the primary to the standby site. The behavior of the LNS process differs between SYNC and ASYNC mode replication. In ASYNC mode transport, LNS reads the redo from the log buffer and hands it over to the RFS process on the standby site. The redo is not necessarily always available in the log buffer: if there are not enough buffers to hold the rate of redo generation and LNS transport lags behind, it has to read from the online redo log files, causing physical I/O and reducing overall redo transport performance. So it is important to ensure that the database is configured with enough log buffers so that the redo is available in the log buffer. In certain cases, we may also need to analyze network performance and make sure the bandwidth supports the redo generation rate.
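For reference, a minimal sketch of how an ASYNC transport destination is usually configured – the destination number, service name and DB_UNIQUE_NAME below are assumptions, not values from this environment.

-- ASYNC/NOAFFIRM destination: LNS ships redo from the log buffer when it can keep up
alter system set log_archive_dest_2 =
  'SERVICE=standby_tns ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby'
  scope=both sid='*';
alter system set log_archive_dest_state_2 = 'ENABLE' scope=both sid='*';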
Oracle externalized buffer hit ratios through the view x$logbuf_readhist and the important columns are –
BUFSIZE – Actual and estimated buffer sizes. CURRENT row is the configured log buffer row
RDMEMBLKS – Actual and estimated reads from the log buffer
RDDISKBLKS – Actual and estimated reads from the Online Redo Log files
HITRATE – Memory hit ratio for the corresponding buffer size. The row marked CURRENT (in BUFINFO) is for the present log buffer. The ratio is calculated as RDMEMBLKS/(RDMEMBLKS + RDDISKBLKS). It is important to keep the hit ratio close to 100% in a healthy, well-performing data guard environment.
SQL> select INST_ID,BUFSIZE,RDMEMBLKS,RDDISKBLKS,HITRATE,BUFINFO from x$logbuf_readhist INST_ID BUFSIZE RDMEMBLKS RDDISKBLKS HITRATE BUFINFO ---------- -------------------- ------------------------ ------------------------ ---------- -------------------- 2 40960K 227100628 422257113 34 2 49152K 260607089 388750652 40 2 57344K 295806637 353551104 45 2 65536K 347028496 302329245 53 2 73728K 544307366 105050375 83 2 81920K 548003937 101353804 84 CURRENT 2 90112K 557950886 91406855 85 2 98304K 563062694 86295047 86 2 106496K 567363494 81994247 87 2 114688K 570779558 78578183 87 2 122880K 573573030 75784711 88 2 131072K 576063398 73294343 88 2 139264K 577963942 71393799 89 2 147456K 579520422 69837319 89 2 155648K 580839334 68518407 89 2 163840K 582027174 67330567 89 TARGET-90 16 rows selected. SQL> show parameter log_buff NAME TYPE VALUE ------------------------------------ -------------------------------- ------------------------------ log_buffer integer 83886080 SQL> select 83886080/1024 from dual; 83886080/1024 ------------- 81920
In the above case LOG_BUFFER is configured to 81920K and the corresponding hit ratio is 84%, showing a possible LNS degradation in redo transport. In this case, consider increasing LOG_BUFFER to a higher value and making sure there is no network bottleneck between the primary and the standby. This may improve the overall Data Guard syncing performance.
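A minimal sketch of acting on this – the value below simply reuses the TARGET-90 row (163840K) from the history above as an illustration, not a recommendation.

-- LOG_BUFFER is a static parameter, so this needs an spfile change and an instance restart
alter system set log_buffer = 167772160 scope=spfile sid='*';   -- 163840K
-- after the bounce, re-check the hit ratio
select bufsize, hitrate, bufinfo from x$logbuf_readhist;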
Unified Auditing – some insights
Oracle 12c Unified Auditing is a brand new feature in the latest Oracle version which consolidates database-level audit records into a single location. DBAs can access the audit information from the view UNIFIED_AUDIT_TRAIL for all kinds of audit records, and they are –
SQL> select distinct COMPONENT from all_unified_audit_actions; COMPONENT ---------------------------------------------------------------- Direct path API RMAN_AUDIT Database Vault XS Label Security Datapump Standard FineGrainedAudit 8 rows selected.
UNIFIED_AUDIT_TRAIL is a view owned by SYS and a public synonym pointing to this view.
SQL> select owner,object_type from dba_objects where object_name='UNIFIED_AUDIT_TRAIL'; OWNER OBJECT_TYPE ---------- ------------------------------ SYS VIEW PUBLIC SYNONYM
Any database user who has the AUDIT_ADMIN or AUDIT_VIEWER role can administer or view the audit records respectively. Oracle separated the auditor roles from the traditional DBA/SYSDBA roles so that an independent database auditor can operate outside the DBA group. To harden security, no DBA or SYSDBA user can UPDATE or DELETE the audit records outside DBMS_AUDIT_MGMT.
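As a quick sketch (the auditor account name and the 90-day cut-off are assumptions), this is how the viewer role is granted and how old records are purged through DBMS_AUDIT_MGMT:

grant audit_viewer to sec_auditor;

begin
  dbms_audit_mgmt.set_last_archive_timestamp(
    audit_trail_type  => dbms_audit_mgmt.audit_trail_unified,
    last_archive_time => systimestamp - interval '90' day);
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_unified,
    use_last_arch_timestamp => true);
end;
/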
Audit records are physically stored in the AUDSYS schema along with supporting indexes and LOBs.
SQL> select owner,object_type,object_name from dba_objects where owner='AUDSYS'; OWNER OBJECT_TYPE OBJECT_NAME ---------- ------------------------------ ------------------------------ AUDSYS INDEX CLI_LOB$7e10621c$1$1 AUDSYS INDEX PARTITION CLI_LOB$7e10621c$1$1 AUDSYS INDEX CLI_TIME$7e10621c$1$1 AUDSYS INDEX PARTITION CLI_TIME$7e10621c$1$1 AUDSYS LOB SYS_LOB0000091761C00014$$ AUDSYS LOB PARTITION SYS_LOB0000091761C00014$$ AUDSYS INDEX SYS_IL0000091761C00014$$ AUDSYS INDEX PARTITION SYS_IL0000091761C00014$$ AUDSYS INDEX CLI_SCN$7e10621c$1$1 AUDSYS INDEX PARTITION CLI_SCN$7e10621c$1$1 AUDSYS TABLE PARTITION CLI_SWP$7e10621c$1$1 AUDSYS TABLE CLI_SWP$7e10621c$1$1 12 rows selected.
You can see a table CLI_SWP$7e10621c$1$1; the audit records are written to this table, and UNIFIED_AUDIT_TRAIL must be a view built on top of it – just an assumption, as the view should be getting the records from the audit base table. Let us confirm that assumption by checking the execution plan for UNIFIED_AUDIT_TRAIL.
-------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib | -------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 11738 | 1123K| 6 (34)| 00:00:01 | | | | |* 1 | HASH JOIN RIGHT OUTER | | 11738 | 1123K| 6 (34)| 00:00:01 | | | | | 2 | INDEX FAST FULL SCAN | I_STMT_AUDIT_OPTION_MAP | 308 | 1232 | 2 (0)| 00:00:01 | | | | | 3 | NESTED LOOPS | | 11738 | 1077K| 4 (50)| 00:00:01 | | | | |* 4 | HASH JOIN RIGHT OUTER| | 11738 | 997K| 4 (50)| 00:00:01 | | | | | 5 | INDEX FAST FULL SCAN| I_SYSTEM_PRIVILEGE_MAP | 237 | 1185 | 2 (0)| 00:00:01 | | | | | 6 | PX COORDINATOR | | | | | | | | | | 7 | PX SEND QC (RANDOM)| :TQ10000 | 11738 | 229K| 2 (100)| 00:00:01 | Q1,00 | P->S | QC (RAND) | |* 8 | VIEW | GV$UNIFIED_AUDIT_TRAIL | | | | | Q1,00 | PCWP | | |* 9 | FIXED TABLE FULL | X$UNIFIED_AUDIT_TRAIL | 11738 | 229K| 2 (100)| 00:00:01 | Q1,00 | PCWP | | |* 10 | INDEX UNIQUE SCAN | I_UNIFIED_AUDIT_ACTIONS | 1 | 7 | 0 (0)| 00:00:01 | | | | -------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - access("AUDIT_OPTION"="AOM"."OPTION#"(+)) 4 - access("SPX"."PRIVILEGE"(+)=(-"SYSTEM_PRIVILEGE")) 8 - filter("CON_ID"=0 OR "CON_ID"=1) 9 - filter("CON_ID"=0 OR "CON_ID"=1) 10 - access("AUDIT_TYPE"="ACT"."TYPE" AND "ACTION"="ACT"."ACTION") Note ----- - statement not queuable: gv$ statement Statistics ---------------------------------------------------------- 134589 recursive calls 0 db block gets 300896 consistent gets 0 physical reads 0 redo size 2057935 bytes sent via SQL*Net to client 85625 bytes received via SQL*Net from client 7736 SQL*Net roundtrips to/from client 2 sorts (memory) 0 sorts (disk) 116016 rows processed
That is a bit interesting – the base table for UNIFIED_AUDIT_TRAIL is not the actual audit table in the AUDSYS schema; rather, the records are coming from an X$ table known as X$UNIFIED_AUDIT_TRAIL, and that is confusing. Moreover, the X$ fixed table is owned by the SYS user, not the AUDSYS schema! You can see the fixed table X$UNIFIED_AUDIT_TRAIL listed in the V$FIXED_TABLE view.
SQL> select * from v$fixed_table where name='X$UNIFIED_AUDIT_TRAIL'; NAME OBJECT_ID TYPE TABLE_NUM CON_ID ------------------------------ ---------- ----- ---------- ---------- X$UNIFIED_AUDIT_TRAIL 4294953277 TABLE 641 0
So, there is some relationship between the fixed table X$UNIFIED_AUDIT_TRAIL and CLI_SWP$7e10621c$1$1, the physical table where the actual data is stored. Moreover, every X$ table is a non-physical, volatile table interfacing a memory structure and is not really suitable for storing the audit records. A 10046 trace should tell us more of the inside story and the relationship between X$UNIFIED_AUDIT_TRAIL and CLI_SWP$7e10621c$1$1.
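For reference, a 10046 trace can be enabled with the standard event syntax (the same syntax appears again in the bind-peeking test later in this post); the rownum filter in the sample query is just an illustration to keep the trace small.

alter session set events '10046 trace name context forever, level 12';
select * from unified_audit_trail where rownum < 100;
alter session set events '10046 trace name context off';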
From the 10046 trace I came to a conclusion – reading the audit records happens in two phases.
Phase 1: Oracle reads the complete base audit table and populates the X$UNIFIED_AUDIT_TRAIL table for that container.
FETCH #140085662005752:c=0,e=28,p=0,cr=4,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120329 FETCH #140085662005752:c=0,e=113,p=0,cr=2,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120480 FETCH #140085662005752:c=0,e=25,p=0,cr=3,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120548 FETCH #140085662005752:c=0,e=19,p=0,cr=3,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120601 FETCH #140085662005752:c=0,e=18,p=0,cr=3,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120651 FETCH #140085662005752:c=999,e=55,p=0,cr=2,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120736 FETCH #140085662005752:c=0,e=24,p=0,cr=3,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120806 FETCH #140085662005752:c=0,e=21,p=0,cr=3,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120860 FETCH #140085662005752:c=0,e=24,p=0,cr=4,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120916 FETCH #140085662005752:c=0,e=18,p=0,cr=2,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850120966 FETCH #140085662005752:c=0,e=16,p=0,cr=3,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850121014 FETCH #140085662005752:c=0,e=17,p=0,cr=3,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850121059 FETCH #140085662005752:c=0,e=16,p=0,cr=2,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850121104 FETCH #140085662005752:c=0,e=21,p=0,cr=4,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850121157 FETCH #140085662005752:c=0,e=23,p=0,cr=4,cu=0,mis=0,r=1,dep=2,og=1,plh=1405023807,tim=10748850121212 STAT #140085662005752 id=1 cnt=102 pid=0 pos=1 obj=0 op='PARTITION RANGE SINGLE PARTITION: KEY 1 (cr=286 pr=1 pw=0 time=66 us cost=496 size=70729009 card=32579)' STAT #140085662005752 id=2 cnt=102 pid=1 pos=1 obj=91761 op='TABLE ACCESS BY LOCAL INDEX ROWID CLI_SWP$7e10621c$1$1 PARTITION: KEY 1 (cr=286 pr=1 pw=0 time=55 us cost=496 size=70729009 card=32579)' STAT #140085662005752 id=3 cnt=169 pid=2 pos=1 obj=91771 op='INDEX RANGE SCAN CLI_LOB$7e10621c$1$1 PARTITION: KEY 1 (cr=103 pr=1 pw=0 time=462 us cost=496 size=0 card=50122)' CLOSE #140085662005752:c=0,e=80,dep=2,type=1,tim=10748850121429
Phase 2: The user gets the audit records from the X$UNIFIED_AUDIT_TRAIL base table; there is no need to worry about the other containers' audit data, as it was already filtered in the first phase.
EXEC #140085626601992:c=0,e=46,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=1,plh=1999586355,tim=10748918362622 FETCH #140085626601992:c=0,e=18,p=0,cr=1,cu=0,mis=0,r=1,dep=2,og=1,plh=1999586355,tim=10748918362650 WAIT #140085668747360: nam='PX Deq: reap credit' ela= 25 p1=0 p2=0 p3=0 obj#=91764 tim=10748918362746 WAIT #140085668747360: nam='PX Deq: reap credit' ela= 7 p1=0 p2=0 p3=0 obj#=91764 tim=10748918363179 WAIT #140085668747360: nam='PX Deq Credit: send blkd' ela= 435 sleeptime/senderid=268632063 passes=1 qref=7974696744 obj#=91764 tim=10748918363179 WAIT #140085668747360: nam='Disk file operations I/O' ela= 246 FileOperation=2 fileno=0 filetype=36 obj#=91764 tim=10748918364034 WAIT #140085668747360: nam='Disk file operations I/O' ela= 1115 FileOperation=8 fileno=1 filetype=36 obj#=91764 tim=10748918365176 WAIT #140085668747360: nam='Disk file operations I/O' ela= 1346 FileOperation=5 fileno=0 filetype=36 obj#=91764 tim=10748918366577 WAIT #140085668747360: nam='PX Deq: reap credit' ela= 31 p1=0 p2=0 p3=0 obj#=91764 tim=10748918366739 WAIT #140085668747360: nam='PX Deq: reap credit' ela= 56 p1=0 p2=0 p3=0 obj#=91764 tim=10748918371873 WAIT #140085668747360: nam='PX Deq: reap credit' ela= 9 p1=0 p2=0 p3=0 obj#=91764 tim=10748918371951 WAIT #140085668747360: nam='PX Deq Credit: send blkd' ela= 5155 sleeptime/senderid=268632063 passes=1 qref=7974696744 obj#=91764 tim=10748918371951 EXEC #140085668747360:c=22873523,e=158435095,p=32962,cr=231162,cu=889,mis=0,r=0,dep=1,og=1,plh=4119967587,tim=10748918372017 WAIT #140085668747360: nam='PX Deq: reap credit' ela= 9 p1=0 p2=0 p3=0 obj#=91764 tim=10748918372133 WAIT #140085668747360: nam='PX Deq: reap credit' ela= 15 p1=0 p2=0 p3=0 obj#=91764 tim=10748918380111 WAIT #140085668747360: nam='PX Deq: Execution Msg' ela= 8930 sleeptime/senderid=268632063 passes=2 p3=7974696744 obj#=91764 tim=10748918381073 STAT #140085668747360 id=1 cnt=0 pid=0 pos=1 obj=0 op='HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us cost=8 size=448192054 card=11738)' STAT #140085668747360 id=2 cnt=0 pid=1 pos=1 obj=464 op='INDEX FAST FULL SCAN I_STMT_AUDIT_OPTION_MAP (cr=0 pr=0 pw=0 time=0 us cost=2 size=7084 card=308)' STAT #140085668747360 id=3 cnt=0 pid=1 pos=2 obj=0 op='HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us cost=6 size=447922080 card=11738)' STAT #140085668747360 id=4 cnt=0 pid=3 pos=1 obj=455 op='INDEX FAST FULL SCAN I_SYSTEM_PRIVILEGE_MAP (cr=0 pr=0 pw=0 time=0 us cost=2 size=5925 card=237)' STAT #140085668747360 id=5 cnt=0 pid=3 pos=2 obj=0 op='HASH JOIN (cr=0 pr=0 pw=0 time=0 us cost=4 size=447628630 card=11738)' STAT #140085668747360 id=6 cnt=0 pid=5 pos=1 obj=4598 op='TABLE ACCESS FULL ALL_UNIFIED_AUDIT_ACTIONS (cr=0 pr=0 pw=0 time=0 us cost=2 size=8711 card=281)' STAT #140085668747360 id=7 cnt=0 pid=5 pos=2 obj=0 op='PX COORDINATOR (cr=0 pr=0 pw=0 time=0 us)' STAT #140085668747360 id=8 cnt=0 pid=7 pos=1 obj=0 op='PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us cost=2 size=27995130 card=11738)' STAT #140085668747360 id=9 cnt=99506 pid=8 pos=1 obj=4294953278 op='VIEW GV$UNIFIED_AUDIT_TRAIL (cr=231043 pr=32962 pw=0 time=108383889 us)' STAT #140085668747360 id=10 cnt=99506 pid=9 pos=1 obj=0 op='FIXED TABLE FULL X$UNIFIED_AUDIT_TRAIL (cr=231043 pr=32962 pw=0 time=108237708 us cost=2 size=27995130 card=11738)'
So, that is where the fixed table X$UNIFIED_AUDIT_TRAIL comes into play, and this is not really going to be cheap. Look at the SELECT below –
SQL> select * from unified_audit_trail; 158836 rows selected. Elapsed: 00:01:16.35
I got just 158836 audit records and it took 76.35 seconds! Where is the time spent? Look at the recursive calls. I believe these are the DB calls for populating the fixed table X$UNIFIED_AUDIT_TRAIL from the AUDSYS physical table where the consolidated audit records are located. Compared to traditional tables, your audit table access will be much slower.
Statistics ---------------------------------------------------------- 194858 recursive calls 2285 db block gets 398738 consistent gets 46457 physical reads 140 redo size 84053638 bytes sent via SQL*Net to client 35188183 bytes received via SQL*Net from client 292940 SQL*Net roundtrips to/from client 2 sorts (memory) 0 sorts (disk) 158836 rows processed
Currently, I have only a few days of STANDARD audit records for a single PDB. It will be interesting to see what the response time and memory usage will be when hosting a few years of audit records for security compliance across a number of PDBs. One needs to test the real impact before enabling unified auditing and retaining the records for a longer period.
Note: These are my own opinions and research and can be wrong. Do your own research before implementing; I will not accept any liabilities or damages.
PARTIAL Indexes
Oracle PARTIAL indexes – a new feature in Oracle 12c. My initial impression was really good, but the implementation is quite poor. With this feature, you can define INDEXING OFF for a partition so that any index created with the PARTIAL clause will skip that partition. The advantage is that you can selectively create index partitions. And for a global index or a global partitioned index, the rows from the INDEXING OFF partitions will be skipped while creating the index.
You need to create the partitions with INDEXING OFF while creating the table to opt in to PARTIAL indexes.
create table test (C1 number,c2 varchar2(2)) partition by list(c2) ( partition p1 values('A') INDEXING OFF, partition p2 values('B') INDEXING OFF, partition p3 values('C') INDEXING OFF, partition p4 values('D'));
There are 3 partitions with INDEXING OFF, while partition P4 will be indexed by default.
SQL> select PARTITION_NAME,INDEXING from user_tab_partitions ; PARTITION_NAME INDE --------------- ---- P1 OFF P2 OFF P3 OFF P4 ON
Now, I have created indexes on both columns – two indexes.
SQL> create index i_c1 on test(c1) LOCAL indexing PARTIAL; Index created. SQL> create index i_c2 on test(c2) LOCAL indexing PARTIAL;SQL> Index created. SQL> select TABLE_NAME,INDEX_NAME from user_indexes; TABLE_NAME INDEX_NAME --------------- --------------- TEST I_C1 SQL> select PARTITION_NAME,INDEX_NAME,status from user_ind_partitions ; PARTITION_NAME INDEX_NAME STATUS --------------- --------------- -------- P1 I_C1 UNUSABLE P2 I_C1 UNUSABLE P3 I_C1 UNUSABLE P4 I_C1 USABLE P1 I_C2 UNUSABLE P2 I_C2 UNUSABLE P3 I_C2 UNUSABLE P4 I_C2 USABLE
My expectation was that it would create index segments only for partition P4, but Oracle created UNUSABLE index partitions for the rest of the partitions where INDEXING is OFF. I am not really impressed by the fact that I have to keep UNUSABLE index partitions in my production databases just because I chose the new feature, PARTIAL indexes! It is a challenge to identify the difference between genuinely UNUSABLE index partitions and those UNUSABLE due to PARTIAL indexes. Well, Oracle should have avoided creating these unwanted objects if this is a serious implementation.
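As a possible way to tell them apart (a sketch, assuming the local index partitions keep the default names matching the table partitions), the dictionary can be joined like this:

-- Flag UNUSABLE index partitions that are expected because the index is PARTIAL
-- and the underlying table partition has INDEXING OFF
select ip.index_name,
       ip.partition_name,
       ip.status,
       i.indexing   as index_indexing,
       tp.indexing  as table_part_indexing,
       case
         when ip.status = 'UNUSABLE' and i.indexing = 'PARTIAL' and tp.indexing = 'OFF'
           then 'EXPECTED (partial index)'
         when ip.status = 'UNUSABLE'
           then 'NEEDS REBUILD'
         else 'OK'
       end as assessment
from   user_ind_partitions ip
join   user_indexes        i  on i.index_name = ip.index_name
join   user_tab_partitions tp on tp.table_name = i.table_name
                             and tp.partition_name = ip.partition_name
order  by ip.index_name, ip.partition_name;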
Now I rebuilt all the UNUSABLE index partitions – I just wanted to know how Oracle implemented this feature – would it stop me from rebuilding the index partitions because of the INDEXING OFF option?
SQL> alter index i_c1 rebuild partition p1 tablespace users; Index altered. SQL> alter index i_c1 rebuild partition p2 tablespace users; Index altered. SQL> alter index i_c1 rebuild partition p3 tablespace users; Index altered. SQL> alter index i_c1 rebuild partition p4 tablespace users; Index altered. SQL> alter index i_c2 rebuild partition p4 tablespace users; Index altered. SQL> alter index i_c2 rebuild partition p3 tablespace users; Index altered. SQL> alter index i_c2 rebuild partition p2 tablespace users; Index altered. SQL> alter index i_c2 rebuild partition p1 tablespace users; Index altered.
Now all of my PARTIAL index partitions are rebuilt and USABLE, but the indexes still say they are PARTIAL!
SQL> select PARTITION_NAME,INDEX_NAME,status,tablespace_name from user_ind_partitions ; PARTITION_NAME INDEX_NAME STATUS TABLESPACE_NAME --------------- --------------- -------- ------------------------------ P4 I_C2 USABLE USERS P3 I_C2 USABLE USERS P2 I_C2 USABLE USERS P1 I_C2 USABLE USERS P4 I_C1 USABLE USERS P3 I_C1 USABLE USERS P2 I_C1 USABLE USERS P1 I_C1 USABLE USERS 8 rows selected. SQL> select index_nAme,INDEXING from user_indexes; INDEX_NAME INDEXIN --------------- ------- I_C1 PARTIAL I_C2 PARTIAL
Is this something new in Oracle 12c? I dropped and re-created the index again with PARTIAL option.
SQL> drop index I_C1; Index dropped. SQL> drop index I_C2 ; Index dropped. SQL> create index i_c1 on test(c1) LOCAL indexing PARTIAL; Index created. SQL> create index i_c2 on test(c2) LOCAL indexing PARTIAL; Index created. SQL> select index_name,INDEXING from user_indexes; INDEX_NAME INDEXIN --------------- ------- I_C2 PARTIAL I_C1 PARTIAL SQL> select PARTITION_NAME,INDEX_NAME,status from user_ind_partitions ; PARTITION_NAME INDEX_NAME STATUS --------------- --------------- -------- P1 I_C1 UNUSABLE P2 I_C1 UNUSABLE P3 I_C1 UNUSABLE P4 I_C1 USABLE P1 I_C2 UNUSABLE P2 I_C2 UNUSABLE P3 I_C2 UNUSABLE P4 I_C2 USABLE 8 rows selected.
Since the index partitions are UNUSABLE, I was very curious to know what would happen if I set SKIP_UNUSABLE_INDEXES to FALSE, while the default value is TRUE.
SQL> alter session set skip_unusable_indexes =false; Session altered. SQL> insert into test select rownum n,'B' from all_objects where rownum < 10; insert into test select rownum n,'B' from all_objects where rownum < 10 * ERROR at line 1: ORA-01502: index 'USR1.I_C2' or partition of such index is in unusable state
As expected, the PARTIAL indexes started failing with the error “ORA-01502: index ‘USR1.I_C2’ or partition of such index is in unusable state”! So, a PARTIAL local index is just a set of UNUSABLE index partitions, which you could implement in 11g as well!
However, INDEXING OFF does have a good effect on GLOBAL indexes – partitioned or non-partitioned. Global indexes do indeed skip the rows from the INDEXING OFF partitions.
SQL> create index i_c1 on test(c1) ; Index created. SQL> select SEGMENT_NAME,BYTES,PARTITION_NAME from user_segments where SEGMENT_NAME='I_C1'; SEGMENT_NAME BYTES PARTITION_NAME -------------------- ---------- --------------- I_C1 7340032 SQL> drop index i_c1 ; Index dropped. SQL> create index i_c1 on test(c1) indexing partial; Index created. SQL> select SEGMENT_NAME,BYTES,PARTITION_NAME from user_segments where SEGMENT_NAME='I_C1'; SEGMENT_NAME BYTES PARTITION_NAME -------------------- ---------- --------------- I_C1 2097152 SQL> alter index i_c1 rebuild ; Index altered. SQL> select SEGMENT_NAME,BYTES,PARTITION_NAME from user_segments where SEGMENT_NAME='I_C1'; SEGMENT_NAME BYTES PARTITION_NAME -------------------- ---------- --------------- I_C1 2097152
The index created with the default (FULL) option is 7340032 bytes, while the one created with the PARTIAL option is 2097152 bytes. That is cool – Oracle is able to skip the rows from the INDEXING OFF partitions while creating the global index!
Tablespace growth history
Recently a tablespace ran out of space and it ended up in an application failure. I questioned my DBA and he just plainly blamed the application team members, who had loaded a large number of records without prior notice. A convincing answer, but you can’t really fool Oracle.
Oracle introduced the DBA_TABLESPACE_USAGE_METRICS view in 10g to report the space usage within a tablespace. I created a new tablespace and the space usage was immediately reported in the view.
SQL> select * from DBA_TABLESPACE_USAGE_METRICS where TABLESPACE_NAME='MYDATA'; TABLESPACE_NAME USED_SPACE TABLESPACE_SIZE USED_PERCENT ------------------------------ ---------- --------------- ------------ MYDATA 128 64000 .2 SQL> select USED_SPACE*8/1024,TABLESPACE_SIZE*8/1024 from DBA_TABLESPACE_USAGE_METRICS where TABLESPACE_NAME='MYDATA'; USED_SPACE*8/1024 TABLESPACE_SIZE*8/1024 ----------------- ---------------------- 1 500 SQL> select file_name,AUTOEXTENSIBLE from dba_data_files where TABLESPACE_NAME='MYDATA'; FILE_NAME AUT -------------------------------------------------- --- /oradata1/test/mydata01.dbf NO
Both USED_SPACE and TABLESPACE_SIZE are reported in database blocks. The non-autoextensible data file reported 500MB. I then made the data file autoextensible and the size of the tablespace grew to the MAXSIZE of the data file.
SQL> alter database datafile '/oradata1/test/mydata01.dbf' autoextend on; Database altered. SQL> select USED_SPACE*8/1024,TABLESPACE_SIZE*8/1024 from DBA_TABLESPACE_USAGE_METRICS where TABLESPACE_NAME='MYDATA'; USED_SPACE*8/1024 TABLESPACE_SIZE*8/1024 ----------------- ---------------------- 1 32767.9844 SQL> select * from DBA_TABLESPACE_USAGE_METRICS where TABLESPACE_NAME='MYDATA'; TABLESPACE_NAME USED_SPACE TABLESPACE_SIZE USED_PERCENT ------------------------------ ---------- --------------- ------------ MYDATA 128 4194302 .003051759
That is really cool stuff.
This view is captured as part of the AWR snapshots in the history table DBA_HIST_TBSPC_SPACE_USAGE. So it is easy for a DBA to report the growth of a tablespace within the retention period. Even though my DBA said the failure was because of sudden data growth, I knew where to look at the data growth. And my report was –
SQL> select TABLESPACE_SIZE*8/1024/1024,TABLESPACE_MAXSIZE*8/1024/1024,TABLESPACE_USEDSIZE*8/1024/1024,RTIME from DBA_HIST_TBSPC_SPACE_USAGE where TABLESPACE_ID=33 order by SNAP_ID; TABLESPACE_SIZE*8/1024/1024 TABLESPACE_MAXSIZE*8/1024/1024 TABLESPACE_USEDSIZE*8/1024/1024 RTIME --------------------------- ------------------------------ ------------------------------- -------------------- 60 60 59.0448608 03/07/2015 12:00:06 60 60 59.0448608 03/07/2015 13:00:21 60 60 59.0448608 03/07/2015 14:00:03 60 60 59.0448608 03/07/2015 15:00:08 60 60 59.0448608 03/07/2015 16:00:19 60 60 59.0448608 03/07/2015 17:00:26 60 60 59.0448608 03/07/2015 18:00:29 60 60 59.0526733 03/07/2015 19:00:33 60 60 59.9413452 03/07/2015 20:00:37 62.9384766 90 60.0664063 03/07/2015 21:00:41 62.9384766 90 60.0664063 03/07/2015 22:00:50 62.9384766 90 60.2773438 03/07/2015 23:00:49 62.9384766 90 61.1416016 03/08/2015 00:00:07 62.9384766 90 61.2900391 03/08/2015 01:00:12 62.9384766 90 61.2900391 03/08/2015 03:00:19 62.9384766 90 61.2900391 03/08/2015 04:00:27 62.9384766 90 61.2900391 03/08/2015 05:00:31 62.9384766 90 61.2900391 03/08/2015 06:00:08 62.9384766 90 61.2900391 03/08/2015 07:00:10 62.9384766 90 61.2900391 03/08/2015 08:00:13 62.9384766 90 61.2900391 03/08/2015 09:00:14 62.9384766 90 61.2900391 03/08/2015 10:00:22 62.9384766 90 61.2900391 03/08/2015 11:00:22 62.9384766 90 61.2900391 03/08/2015 12:00:07 62.9384766 90 61.2900391 03/08/2015 13:00:34 62.9384766 90 61.2900391 03/08/2015 14:00:26 62.9384766 90 61.2900391 03/08/2015 15:00:31 62.9384766 90 61.2900391 03/08/2015 16:00:36 62.9384766 90 61.2900391 03/08/2015 17:00:39 62.9384766 90 61.2900391 03/08/2015 18:00:41 62.9384766 90 61.2900391 03/08/2015 19:00:43 62.9384766 90 61.2900391 03/08/2015 20:00:47 62.9384766 90 61.2900391 03/08/2015 21:00:52
The growth report clearly shows that space was added at 03/07/2015 21:00:41 and the growth was really normal! Thanks to Oracle.
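If you do not already know the TABLESPACE_ID, it can be looked up by name – my assumption here is that TABLESPACE_ID in the history table maps to TS# in V$TABLESPACE (MYDATA and the 8K block size are just the example values from above):

-- Growth history by tablespace name instead of a hard-coded ID
select h.snap_id,
       h.rtime,
       h.tablespace_size     * 8 / 1024 / 1024 as ts_size_gb,
       h.tablespace_usedsize * 8 / 1024 / 1024 as ts_used_gb
from   dba_hist_tbspc_space_usage h,
       v$tablespace               t
where  h.tablespace_id = t.ts#
and    t.name = 'MYDATA'
order  by h.snap_id;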
x$bh and consistency
Oracle caches data blocks in the buffer cache in various modes depending on the block usage. As per the Oracle documentation these can be CR (consistent read), XCUR (current mode – updates), FREE etc. I understood – and have heard others say – that whenever a block is read into memory it will be in CR mode, while if the block is fetched for UPDATE it will be in XCUR mode so that sessions can apply the transactions under an exclusive lock. The buffer cache is divided into multiple working sets, and I believe each working set has a cold and a hot region based on the touch count algorithm. Armed with these details, I did the following test and the results were a little confusing – more than a little!
My one-row, one-column table TEST is as follows –
SQL> desc TEST Name Null? Type ----------------------------------------- -------- ---------------------------- C1 NUMBER SQL> select * from test; C1 ---------- 6
Then I flushed the buffer cache to clear the TEST table blocks, and also made sure there were no active transactions at that point.
SQL> alter system flush buffer_cache; System altered. SQL> col object_name for a25 SQL> set linesize 132 SQL> select 2 indx 3 ,o.object_name 4 ,decode(state,0,'free',1,'xcur',2,'scur',3,'cr', 4,'read',5,'mrec' 5 ,6,'irec',7,'write',8,'pi') state 6 , dbarfil 7 , dbablk 8 , ba 9 ,tch 10 from x$bh b , dba_objects o 11 where b.obj = o.data_object_id 12 and o.object_name = 'TEST' 13 and state > 0; no rows selected SQL> select count(*) from v$transaction; COUNT(*) ---------- 0
Test 1:
I read one row into memory using ROWID to force a single block read from the TEST table. Since I was only reading, I was expecting a CR buffer.
SQL> conn abc/abc Connected. SQL> select * from test where rowid='AAAUxpAABAAA0MJAAA'; C1 ---------- 5
But to my surprise, the block was in XCUR mode and the touch count was 1.
SQL> select indx 2 3 ,o.object_name 4 ,decode(state,0,'free',1,'xcur',2,'scur',3,'cr', 4,'read',5,'mrec' 5 ,6,'irec',7,'write',8,'pi') state 6 , dbarfil 7 , dbablk 8 , ba 9 ,tch 10 from x$bh b , dba_objects o 11 where b.obj = o.data_object_id 12 and o.object_name = 'TEST' 13 and state > 0; INDX OBJECT_NAME STATE DBARFIL DBABLK BA TCH ---------- ------------------------- ----- ---------- ---------- ---------------- ---------- 43737 TEST xcur 1 213769 00000003A7DFA000 1
I repeated the read one more time, and the same XCUR block increased the touch count to 2.
SQL> / INDX OBJECT_NAME STATE DBARFIL DBABLK BA TCH ---------- ------------------------- ----- ---------- ---------- ---------------- ---------- 43722 TEST xcur 1 213769 00000003A7DFA000 2
From the first session I have executed the SQL 4 more times and the count reached 6.
Note: I did this with a 5 second interval – I will explain that in my next entry.
SQL> / INDX OBJECT_NAME STATE DBARFIL DBABLK BA TCH ---------- ------------------------- ----- ---------- ---------- ---------------- ---------- 43722 TEST xcur 1 213769 00000003A7DFA000 6
I read from multiple sessions and found the touch count was increasing with the XCUR state.
Test 2:
From the first session, I did an UPDATE on my TEST table as the ABC user.
SQL> update test set c1=7 where rowid='AAAUxpAABAAA0MJAAA'; 1 row updated. Oracle applied that UPDATE to the same XCUR block and increased the touch count to 7 SQL> / INDX OBJECT_NAME STATE DBARFIL DBABLK BA TCH ---------- ------------------------- ----- ---------- ---------- ---------------- ---------- 43722 TEST xcur 1 213769 00000003A7DFA000 7
From the second ABC session I read the row and that created a CR block and more interestingly the touch count did not increase.
SQL> / INDX OBJECT_NAME STATE DBARFIL DBABLK BA TCH ---------- ------------------------- ----- ---------- ---------- ---------------- ---------- 43670 TEST cr 1 213769 00000003BBBCC000 1 43671 TEST xcur 1 213769 00000003A7DFA000 7
I read 5 more times from the second session and that created 5 more CR blocks for the XCUR block.
SQL> / INDX OBJECT_NAME STATE DBARFIL DBABLK BA TCH ---------- ------------------------- ----- ---------- ---------- ---------------- ---------- 43665 TEST cr 1 213769 00000003B4C5A000 1 43666 TEST cr 1 213769 00000003ABCE0000 1 43667 TEST cr 1 213769 00000003B6CD8000 1 43668 TEST cr 1 213769 00000003AC36A000 1 43669 TEST cr 1 213769 00000003A5030000 1 43670 TEST cr 1 213769 00000003BBBCC000 1 43671 TEST xcur 1 213769 00000003A7DFA000 7 7 rows selected.
When I read for the 6th time, the count of CR blocks did not increase, and that was expected. At any point in time, a single block can have a maximum of 7 versions in the buffer cache. Any subsequent read will use one of the existing CR blocks to build the new CR block. The more interesting finding was that the touch count never increased during a CR block build, so the access was unaccounted. Each time Oracle built a new CR block and sent that block to the user, the original XCUR block's touch count was not increased. I am not sure why Oracle shows this discrimination toward XCUR blocks that have an active transaction.
Then I executed the READ from the session where the transaction was active – and that increased the touch count again, to 8!
SQL> / INDX OBJECT_NAME STATE DBARFIL DBABLK BA TCH ---------- ------------------------- ----- ---------- ---------- ---------------- ---------- 43665 TEST cr 1 213769 00000003B4C5A000 1 43666 TEST cr 1 213769 00000003ABCE0000 1 43667 TEST cr 1 213769 00000003B6CD8000 1 43668 TEST cr 1 213769 00000003AC36A000 1 43669 TEST cr 1 213769 00000003A5030000 1 43670 TEST xcur 1 213769 00000003A7DFA000 8 6 rows selected.
I will not give much importance to the reduction of one CR block, as the temporary CR blocks can be reused at any time.
So, my assumptions are –
- When Oracle reads a block into the buffer cache for the first time, it will be in XCUR mode, not CR mode
- I consider this XCUR list the master list of blocks, and all updates go to the master XCUR list only. The master list can be read directly if no transactions are active.
- With no active transactions, any READ will reuse the XCUR block and increase the touch count
- Too many consistent reads lead to CBC latch waits, as everyone will be contending on the XCUR block list
- A transaction will modify the XCUR block and increase the touch count
- Any other session reading an XCUR block will create a fresh CR block by rolling back the uncommitted changes. A maximum of 7 versions of a block can exist. During the CR block access, the touch count does not increase
- The transaction's own session reads directly from the XCUR block without creating a CR block, and the touch count increases.
Bind peeking and session cached cursor
Bind peeking is a nice feature in Oracle that allows multiple optimized plans for one SQL across different bind values. DBAs believe that bind peeking happens during a soft parse, which can identify an alternate plan. Why do I say that?
Hard Parse: Parsing for the first time; nothing exists yet to bind peek against.
Soft Parse: The SQL cursor already exists and this is not its first execution. During the soft parse, bind peeking can happen and a new plan can be generated based on the selectivity for that bind value.
Session Cached Cursor (Softer Soft Parse): The cursor exists in the PGA; just bind the new value and execute. This is the optimal way to execute an SQL – parse once, execute many. Less CPU, fewer or no latches – just bind and execute.
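As a side note, a quick way to see how much the session cursor cache is actually helping your own session is to compare the parse counters with the cache statistics (a simple sketch using V$MYSTAT):

select sn.name, st.value
from   v$mystat   st,
       v$statname sn
where  st.statistic# = sn.statistic#
and    sn.name in ('parse count (total)',
                   'parse count (hard)',
                   'session cursor cache hits',
                   'session cursor cache count');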
A question for the session cached cursor: since there is no soft parsing, can bind peeking happen for a session cached cursor? Can binding force a 'bind peek' and a soft parse if the bind value is not safe? I asked this question to an industry expert and he believes that a session cached cursor can never do bind peeking – it will just bind and execute. But that is a dangerous situation, as it can produce inefficient plans just because it is executing a cached SQL.
So, let me test it!
My test table ABC has 117455 rows for OBJECT_ID=2 and just one row for OBJECT_ID=1, with an index on OBJECT_ID. For the bind peeking test, a 10046 trace shows me two different plans for OBJECT_ID=1 and 2 – one is a full table scan while the other is an index scan.
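For anyone who wants to reproduce this, a sketch of how such a skewed table could be built (row counts will differ on your system; the histogram on OBJECT_ID is what lets the optimizer come up with two different plans):

create table abc as
  select case when rownum = 1 then 1 else 2 end as object_id,
         object_type
  from   all_objects;

create index i_object_id on abc(object_id);

begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'ABC',
    method_opt => 'FOR ALL COLUMNS SIZE 254',   -- histograms, including the skewed OBJECT_ID
    cascade    => true);
end;
/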
The test SQL is,
select count(OBJECT_TYPE) from abc where object_id=:oid ;
With object_id=1
PARSING IN CURSOR #47782675682600 len=56 dep=0 uid=70 oct=3 lid=70 tim=1421932643763296 hv=3165798175 ad='1f7dd8570' sqlid='99pd5nyyb4gsz' select count(OBJECT_TYPE) from abc where object_id=:oid END OF STMT PARSE #47782675682600:c=0,e=440,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1421932643763295 BINDS #47782675682600: Bind#0 oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00 oacflg=03 fl2=1000000 frm=00 csi=00 siz=24 off=0 kxsbbbfp=2b7545793550 bln=22 avl=02 flg=05 value=1 EXEC #47782675682600:c=1000,e=1336,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=2627152604,tim=1421932643764692 WAIT #47782675682600: nam='SQL*Net message to client' ela= 3 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1421932643764734 FETCH #47782675682600:c=0,e=54,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=1,plh=2627152604,tim=1421932643764817 STAT #47782675682600 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=4 pr=0 pw=0 time=63 us)' STAT #47782675682600 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=43 us cost=4 size=11 card=1)' STAT #47782675682600 id=3 cnt=1 pid=2 pos=1 obj=20551 op='INDEX RANGE SCAN I_OBJECT_ID (cr=3 pr=0 pw=0 time=33 us cost=3 size=0 card=1)' WAIT #47782675682600: nam='SQL*Net message from client' ela= 220 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1421932643765178 FETCH #47782675682600:c=0,e=2,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=2627152604,tim=1421932643765213 WAIT #47782675682600: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1421932643765253 WAIT #47782675682600: nam='SQL*Net message from client' ela= 219 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1421932643765489 *** SESSION ID:(2299.43369) 2015-01-22 07:17:23.767
And with OBJECT_ID=2, it is a FULL table scan
PARSING IN CURSOR #47782683021480 len=57 dep=0 uid=70 oct=3 lid=70 tim=1421932662564677 hv=860935858 ad='2353e0480' sqlid='am229vwtp1ppk' select count(OBJECT_TYPE) from abc where object_id=:oid END OF STMT PARSE #47782683021480:c=0,e=374,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1421932662564676 BINDS #47782683021480: Bind#0 oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00 oacflg=03 fl2=1000000 frm=00 csi=00 siz=24 off=0 kxsbbbfp=2b75455c03a0 bln=22 avl=02 flg=05 value=2 EXEC #47782683021480:c=1000,e=1225,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=1045519631,tim=1421932662565992 WAIT #47782683021480: nam='SQL*Net message to client' ela= 3 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1421932662566033 FETCH #47782683021480:c=20996,e=21498,p=0,cr=1590,cu=0,mis=0,r=1,dep=0,og=1,plh=1045519631,tim=1421932662587561 STAT #47782683021480 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=1590 pr=0 pw=0 time=21489 us)' STAT #47782683021480 id=2 cnt=117455 pid=1 pos=1 obj=20550 op='TABLE ACCESS FULL ABC (cr=1590 pr=0 pw=0 time=27469 us cost=447 size=1305040 card=118640)' WAIT #47782683021480: nam='SQL*Net message from client' ela= 252 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1421932662588004 FETCH #47782683021480:c=0,e=2,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=1045519631,tim=1421932662588047 WAIT #47782683021480: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1421932662588069 WAIT #47782683021480: nam='SQL*Net message from client' ela= 379 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1421932662588485 *** SESSION ID:(2299.43369) 2015-01-22 07:17:42.589
Then I executed the below SQL, to see the behavior –
ALTER SESSION SET events '10046 trace name context forever, level 12'; VARIABLE oid NUMBER; EXECUTE :oid := 1; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ;
Looking at the cursor close type=3 shows that the cursor is being closed back into the session cursor cache, which is the expected behavior.
$: grep 47370175687784 xxxx_ora_10255.trc | grep type CLOSE #47370175687784:c=0,e=15,dep=0,type=1,tim=1421938140372548 CLOSE #47370175687784:c=0,e=11,dep=0,type=3,tim=1421938140493515 CLOSE #47370175687784:c=0,e=13,dep=0,type=3,tim=1421938140509940 CLOSE #47370175687784:c=0,e=11,dep=0,type=3,tim=1421938140560694 CLOSE #47370175687784:c=0,e=10,dep=0,type=3,tim=1421938140584517 CLOSE #47370175687784:c=0,e=11,dep=0,type=3,tim=1421938140593613 CLOSE #47370175687784:c=0,e=12,dep=0,type=3,tim=1421938140681558 CLOSE #47370175687784:c=0,e=12,dep=0,type=3,tim=1421938140788580 CLOSE #47370175687784:c=0,e=11,dep=0,type=3,tim=1421938140793258 CLOSE #47370175687784:c=0,e=12,dep=0,type=3,tim=1421938140797589 CLOSE #47370175687784:c=0,e=11,dep=0,type=3,tim=1421938140815194 CLOSE #47370175687784:c=0,e=12,dep=0,type=3,tim=1421938140866576 CLOSE #47370175687784:c=0,e=10,dep=0,type=3,tim=1421938140890609 CLOSE #47370175687784:c=0,e=11,dep=0,type=3,tim=1421938140901529 CLOSE #47370175687784:c=0,e=10,dep=0,type=3,tim=1421938140914521 CLOSE #47370175687784:c=0,e=8,dep=0,type=0,tim=1421938145465058
Here are the various values for TYPE while closing the cursor.
- type 0 : Cursor is closed and not added to the session cache
- type 1 : Cursor is closed but added to the session cache without flushing out an existing cursor
- type 2 : Cursor is closed but added to the session cache after flushing out an existing cursor in cache
- type 3 : Cursor is already existing in the cache
Now, let us see what happens when we execute the statements below –
ALTER SESSION SET events '10046 trace name context forever, level 12'; VARIABLE oid NUMBER; EXECUTE :oid := 1; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; EXECUTE :oid := 2; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; EXECUTE :oid := 1; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ; select count(OBJECT_TYPE) from abc where object_id=:oid ;
The first 5 executions use bind value 1, which caches the indexed-read plan cursor in the session cache; then the bind switches to 2, which should do a full table scan if bind peeking happens. If bind peeking does not happen, it will end up with the wrong plan. Looking at the trace files –
$ : grep obj=20550 xxxxxx_ora_15962.trc STAT #48000122322616 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=57 us cost=4 size=11 card=1)' STAT #48000122322616 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=39 us cost=4 size=11 card=1)' STAT #48000122322616 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=39 us cost=4 size=11 card=1)' STAT #48000122322616 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=36 us cost=4 size=11 card=1)' STAT #48000122322616 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=40 us cost=4 size=11 card=1)' STAT #48000122321672 id=2 cnt=1879295 pid=1 pos=1 obj=20550 op='TABLE ACCESS FULL ABC (cr=24667 pr=0 pw=0 time=226535 us cost=6907 size=20702825 card=117455)' STAT #48000122321672 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=12 us cost=4 size=11 card=1)' STAT #48000122321672 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=15 us cost=4 size=11 card=1)' STAT #48000122321672 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=36 us cost=4 size=11 card=1)' STAT #48000122321672 id=2 cnt=1 pid=1 pos=1 obj=20550 op='TABLE ACCESS BY INDEX ROWID ABC (cr=4 pr=0 pw=0 time=13 us cost=4 size=11 card=1)'
Interesting – bind peeking is happening for a session cached SQL! If you look closely, the SQL started with cursor ID 48000122322616 for the initial index reads, then closed that session cached cursor and opened a new cursor 48000122321672 for the FULL table scan. However, when the bind value changed back to 1, the plan changed under the same cursor ID. I assume this is because both plans were cached under the same cursor, so a session cached cursor was able to execute both plans under the same cursor ID. Oracle is doing a great job here, avoiding sub-optimal plans for a session cached cursor. But an unanswered question remains: how was Oracle able to do bind peeking under a 'softer soft parse'?
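My assumption is that adaptive cursor sharing is what makes this possible; one way to dig further (a sketch, not something I verified in this test) is to look at the bind-sensitivity flags and the child cursors in V$SQL:

select sql_id, child_number, plan_hash_value,
       is_bind_sensitive, is_bind_aware, is_shareable, executions
from   v$sql
where  sql_text like 'select count(OBJECT_TYPE) from abc%';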
CRS Timezone
An interesting problem – the database was running normally without any issues and was added to the OCR. But the log files started reporting a different time for all activities – for example:
The database and the server show a time of 4:25 while the alert log is 5 hours ahead at 9:25!
oracle@prod # tail alert_xxxxxxx.log Recovery of Online Redo Log: Thread 2 Group 16 Seq 129057 Reading mem 0 Mem# 0: +ORAINDEX1/prod/onlinelog/group_16.282.864560747 Mon Jan 12 09:20:39 2015 Media Recovery Waiting for thread 1 sequence 129049 (in transit) Recovery of Online Redo Log: Thread 1 Group 15 Seq 129049 Reading mem 0 Mem# 0: +FLASH/prod/onlinelog/group_15.5059.864560603 Mon Jan 12 09:20:55 2015
Both the database and the server were showing almost 5 hours less than the alert log entries.
oracle@prod # date Mon Jan 12 04:26:34 EST 2015 SQL> select to_char(sysdate,'DD-Mon HH24:MI:SS') from dual; TO_CHAR(SYSDATE,'DD-MONHH24:MI:SS') --------------------------------------------------------------------------- 12-Jan 04:28:46
I also found the CSSD log showing 5 hours ahead of the DB and server time!
grid@prod # tail ocssd.log 2015-01-12 09:12:16.373: [ CSSD][24]clssnmSendingThread: sending status msg to all nodes 2015-01-12 09:12:16.373: [ CSSD][24]clssnmSendingThread: sent 4 status msgs to all nodes 2015-01-12 09:12:20.375: [ CSSD][24]clssnmSendingThread: sending status msg to all nodes 2015-01-12 09:12:20.375: [ CSSD][24]clssnmSendingThread: sent 4 status msgs to all nodes 2015-01-12 09:12:24.377: [ CSSD][24]clssnmSendingThread: sending status msg to all nodes 2015-01-12 09:12:24.377: [ CSSD][24]clssnmSendingThread: sent 4 status msgs to all nodes 2015-01-12 09:12:28.379: [ CSSD][24]clssnmSendingThread: sending status msg to all nodes 2015-01-12 09:12:28.379: [ CSSD][24]clssnmSendingThread: sent 4 status msgs to all nodes 2015-01-12 09:12:32.381: [ CSSD][24]clssnmSendingThread: sending status msg to all nodes 2015-01-12 09:12:32.381: [ CSSD][24]clssnmSendingThread: sent 4 status msgs to all nodes
The problem was that the server was set up with the time zone TZ=US/Eastern while the CRS was installed with the time zone GMT. You can check the CRS time zone in $GRID_HOME/crs/install/s_crsconfig_<SERVER NAME>_env.txt
grid@ # more s_crsconfig_xxxxxx_env.txt ### This file can be used to modify the NLS_LANG environment variable, which determines the charset to be used for messages. ### For example, a new charset can be configured by setting NLS_LANG=JAPANESE_JAPAN.UTF8 ### Do not modify this file except to change NLS_LANG, or under the direction of Oracle Support Services TZ=GMT NLS_LANG=ENGLISH_UNITED KINGDOM.WE8ISO8859P1
Then why was the database showing the same time as the server? Being a CRS resource, shouldn't it have inherited the CRS time? Remember, you can configure a database with a time zone different from the CRS time zone – meaning that, on the same server, each database can take its own time zone. You can view the time zone for a database with –
oracle@prod # srvctl getenv database -d xxxxxx xxxxxx: TZ=EST5EDT LIBPATH=/xxxs/app/oracle/product/11.2.0/db_1/lib
So, even though the CRS time zone is GMT, the database-level time zone is set to EST5EDT, which is US/Eastern.
You may set the DB-level time zone with –
srvctl setenv database -d <dbname> -t 'TZ=<the required time zone>'
enq: SQ – contention
Yesterday I saw huge waits on 'enq: SQ – contention' – in every snapshot there were thousands of waits. But the fix was so simple! Here is the root cause of the issue –
When you select from a sequence, the NEXTVAL is generated from the seq$ table if it is not cached. If it is cached, the value is available in a memory structure and there is no need to generate it, which is a costly affair protected by the 'enq: SQ – contention' wait. The test below shows what happens to this wait going from NOCACHE to CACHE 1000 NEXTVALs.
I executed the SQL below from 20 concurrent sessions.
declare col1 number; begin for a in 1..20000 loop select s.nextval into col1 from dual; end loop; end; /
I used the statements below to run this SQL from 20 different sessions.
nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s & nohup sqlplus abc/abc @s &
With NOCACHE
SQL> create sequence s; select count(*),sum(time_waited) from v$active_session_history where event='enq: SQ - contention' and to_char(SAMPLE_TIME,'DDMONHH24')='09JAN04' and to_char(SAMPLE_TIME,'MI') in (14,15,16,17,18,19,20); COUNT(*) SUM(TIME_WAITED) ---------- ---------------- 2336 1677503813
With CACHE 50
SQL> alter sequence s cache 50; Sequence altered. SQL> select count(*),sum(time_waited) from v$active_session_history where event='enq: SQ - contention' and to_char(SAMPLE_TIME,'DDMONHH24')='09JAN04' and to_char(SAMPLE_TIME,'MI') in (21,22,23,24,25,26,27); COUNT(*) SUM(TIME_WAITED) ---------- ---------------- 701 459026341
With CACHE 500
SQL> alter sequence s cache 500; Sequence altered. SQL> select count(*),sum(time_waited) from v$active_session_history where event='enq: SQ - contention' and to_char(SAMPLE_TIME,'DDMONHH24')='09JAN04' and to_char(SAMPLE_TIME,'MI') in (23,24,25,26,27); COUNT(*) SUM(TIME_WAITED) ---------- ---------------- 192 96392664
With CACHE 1000
SQL> alter sequence s cache 1000 ; Sequence altered. SQL> select count(*),sum(time_waited) from v$active_session_history where event='enq: SQ - contention' and to_char(SAMPLE_TIME,'DDMONHH24')='09JAN04' and to_char(SAMPLE_TIME,'MI') in (29,30,31,32,33,34,35,36,37); COUNT(*) SUM(TIME_WAITED) ---------- ---------------- 84 23352139
With huge concurrency, the number of waits came down from 2336 (time: 1677503813) to 84 (time: 23352139) just by adjusting the sequence CACHE.
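To find other sequences that might benefit from the same change, a simple sketch against DBA_SEQUENCES (the threshold of 20 is arbitrary):

select sequence_owner, sequence_name, cache_size, last_number
from   dba_sequences
where  cache_size < 20
and    sequence_owner not in ('SYS','SYSTEM')
order  by cache_size;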
Version:
SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production PL/SQL Release 11.2.0.1.0 - Production CORE 11.2.0.1.0 Production TNS for Solaris: Version 11.2.0.1.0 - Production NLSRTL Version 11.2.0.1.0 - Production