Tablespace
i) A tablespace is a logical storage unit in an Oracle database.
ii) A tablespace consists of one or more datafiles (see below).
iii) Information about tablespaces can be obtained from the views DBA_TABLESPACES, USER_TABLESPACES and DBA_TEMP_FILES (for temporary tablespaces).
iv) A tablespace is further divided into logical units called segments (a segment is divided into extents, and an extent into blocks). To know more about segments, extents and blocks click here.
v) Tablespace types include SYSTEM, SYSAUX, UNDO, temporary and BIGFILE tablespaces.

Datafiles
i) A datafile is a physical structure used to store Oracle data.
ii) One or more physical datafiles are logically grouped together to make a tablespace.
iii) Information about datafiles can be obtained from the view DBA_DATA_FILES.
iv) A datafile can be associated with only one tablespace.
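The dictionary views above can be queried directly; for example, a simple sketch listing each tablespace with its datafiles and their sizes (column names are from the standard DBA_DATA_FILES view):

```sql
-- List each tablespace with its datafiles and sizes in MB.
SELECT tablespace_name, file_name, bytes / 1024 / 1024 AS size_mb
FROM   dba_data_files
ORDER  BY tablespace_name, file_name;
```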

Adding a Tablespace to the Database
Use the CREATE TABLESPACE command to add a tablespace to the database:
CREATE TABLESPACE tablespace_name DATAFILE 'file_name' [SIZE size];
CREATE TABLESPACE my_tablespace DATAFILE '/u01/oracle/oradata/data01.dbf' SIZE 100M;

To create an undo tablespace:
CREATE UNDO TABLESPACE tablespace_name DATAFILE 'file_name' SIZE size [AUTOEXTEND ON|OFF] [RETENTION GUARANTEE|NOGUARANTEE];

To create a temporary tablespace:
CREATE TEMPORARY TABLESPACE tablespace_name TEMPFILE 'file_name' SIZE size [AUTOEXTEND ON|OFF];

Adding a Datafile to a Tablespace
Use the ALTER TABLESPACE command to add a datafile to a tablespace:
ALTER TABLESPACE tablespace_name ADD DATAFILE 'file_name' [SIZE size];
ALTER TABLESPACE my_tablespace ADD DATAFILE '/u01/oracle/oradata/data02.dbf' SIZE 100M;

To add a tempfile to a temporary tablespace:
ALTER TABLESPACE tablespace_name ADD TEMPFILE 'file_name' SIZE size;

Modifying a Datafile
You can modify a datafile using the ALTER DATABASE command:
ALTER DATABASE DATAFILE 'file_name' AUTOEXTEND ON|OFF [NEXT size] [MAXSIZE size];

ALTER DATABASE DATAFILE '/u01/oracle/oradata/data02.dbf' AUTOEXTEND ON NEXT 30M MAXSIZE 1200M;

which means datafile data02.dbf can automatically grow up to 1200 MB, in increments of 30 MB, as required.
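To review the current autoextend settings for all datafiles, a query along these lines can be used (AUTOEXTENSIBLE, INCREMENT_BY and MAXBYTES are standard DBA_DATA_FILES columns; note that INCREMENT_BY is expressed in database blocks, not bytes):

```sql
-- Review autoextend settings for every datafile.
SELECT file_name,
       autoextensible,
       increment_by,                    -- growth increment, in database blocks
       maxbytes / 1024 / 1024 AS max_mb
FROM   dba_data_files;
```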

New features in Oracle Database 11g that I consider significant.

Adaptive Cursor Sharing in Oracle Database 11g Release 1 – Overcome the problems associated with using bind variables against indexed columns containing skewed data.

Automated Database Maintenance Task Management in Oracle Database 11g Release 1 – Reduce the impact of automated maintenance tasks on your system by controlling how and when they run.

Automatic Memory Management in Oracle Database 11g Release 1 – Oracle 11g takes the next step down the road to making manual memory management a thing of the past.

Automatic SQL Tuning in Oracle Database 11g Release 1 – Take advantage of the Automatic SQL Tuning features of Oracle 11g Release 1.

Automatic Storage Manager (ASM) Enhancements in Oracle Database 11g Release 1 – This article provides an overview of the main Automatic Storage Manager (ASM) enhancements in Oracle Database 11g Release 1.

AWR Baseline Enhancements in Oracle Database 11g Release 1 – Oracle 11g has extended the AWR baseline functionality and made alert threshold creation more flexible.

Case Sensitive Passwords in Oracle Database 11g Release 1 – Understand the implications and administration of this new security feature in Oracle 11g.

Cross-Session PL/SQL Function Result Cache in Oracle Database 11g Release 1 – Improve the performance of PL/SQL functions across the whole database instance by caching return values.

Database Replay in Oracle Database 11g Release 1 – Capture workloads on a production system and replay them exactly as they happened on a test system.

Database Resident Connection Pool (DRCP) in Oracle Database 11g Release 1 – Use the database resident connection pool to reduce the resource requirements of applications that do not support connection pooling.

Data Recovery Advisor in Oracle Database 11g Release 1 – Automatically identify, diagnose and repair corruption or loss of persistent data on disk using this new feature.

DDL With the WAIT Option (DDL_LOCK_TIMEOUT) – Avoid unnecessary “ORA-00054: resource busy” errors in 11g.

Fine-Grained Access to Network Services in Oracle Database 11g Release 1 – Take control of the external services accessed by the UTL_TCP, UTL_INADDR, UTL_HTTP, UTL_SMTP, and UTL_MAIL packages.

Finer Grained Dependency Management in Oracle Database 11g Release 1 – See how finer grained dependency management in 11g reduces the extent of invalidations associated with schema changes.

Flashback and LogMiner Enhancements in Oracle Database 11g Release 1 – See how Oracle 11g Release 1 makes tracking data changes and recovering from logical errors easier because of enhancements to Flashback and LogMiner.

Invisible Indexes in Oracle Database 11g Release 1 – Take control of the indexes available to the optimizer by using invisible indexes in 11g.

Miscellaneous New Features in Oracle Database 11g Release 1 – A collection of new features and enhancements listed in the miscellaneous section of “Oracle Database 11G: New Features for Administrators” OCP syllabus.

Online Table Redefinition Enhancements in Oracle Database 11g Release 1 – Take advantage of the reduced invalidations associated with online table redefinitions in 11g.

Partitioning Enhancements in Oracle Database 11g Release 1 – An introduction to the partitioning enhancements in Oracle 11g Release 1.

Query Result Cache in Oracle Database 11g Release 1 – Improve the performance of SQL across the whole database instance by caching query results.

Read-Only Tables in Oracle Database 11g Release 1 – Protect data in static tables using the read-only table feature in Oracle 11g Release 1.

Resource Manager Enhancements in Oracle Database 11g Release 1 – An overview of the resource manager enhancements in Oracle Database 11g Release 1.

RMAN Enhancements in Oracle Database 11g Release 1 – An overview of all the RMAN enhancements in Oracle Database 11g Release 1.

Scheduler Enhancements in Oracle Database 11g Release 1 – Oracle 11g adds more functionality to the Oracle scheduler.

SecureFiles in Oracle Database 11g Release 1 – Compress, deduplicate and encrypt LOBs using the new LOB functionality in Oracle 11g.

SQL Access Advisor in Oracle Database 11g Release 1 – Use this advisor to get suggestions for indexes, materialized views and partitioning schemes to improve system performance.

SQL Performance Analyzer in Oracle Database 11g Release 1 – Compare the performance of the statements in an SQL tuning set before and after database changes.

SQL Plan Management in Oracle Database 11g Release 1 – Maintain consistent SQL performance regardless of changes in optimizer version, optimizer statistics, schema changes, system settings and SQL profile creation.

Statistics Collection Enhancements in Oracle Database 11g Release 1 – Provide the optimizer with additional selectivity information and avoid new statistics adversely affecting performance.

Table Compression Enhancements in Oracle Database 11g Release 1 – Save space and improve performance of OLTP systems using the enhanced table compression in 11g.

Tablespace Encryption in Oracle Database 11g Release 1 – Simplify the administration of transparent data encryption (TDE) using tablespace encryption.

Temporary Tablespace Enhancements in Oracle Database 11g Release 1 – Take advantage of better visibility and management of temporary tablespaces usage in 11g.

Upgrading to Oracle Database 11g – This article provides a brief overview of the areas involved in upgrading existing databases to Oracle 11g.

Oracle’s Statspack Utility
Statspack is a free tool for monitoring your Oracle database instance. A previous article discussed how to generate a Statspack report; this article will help you interpret much of the information you will find in one.

The Statspack Header
The beginning of the statspack report shows you some basic information about your instance including the database name, instance name, DB ID, version, host and the start and end times of the snapshots used in your report. Here is an example:
STATSPACK report for

DB Name         DB Id    Instance     Inst Num Release     Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
ORCL          2586436430 ORCL                1 9.2.0.4.0   NO      localhost

            Snap Id     Snap Time      Sessions Curs/Sess Comment
            ------- ------------------ -------- --------- -------------------
Begin Snap:    4873 13-Dec-05 05:00:05      110      37.4
  End Snap:    4875 13-Dec-05 07:00:04      651     203.7
   Elapsed:              119.98 (mins)

Cache Sizes
The next section, Cache Sizes, shows some of your instance settings, including Buffer Cache (DB_CACHE_SIZE), Standard Block Size (DB_BLOCK_SIZE), Shared Pool Size (SHARED_POOL_SIZE) and Log Buffer (LOG_BUFFER). These are all instance parameters which you can modify in your spfile/pfile:
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 3,008M Std Block Size: 8K
Shared Pool Size: 1,920M Log Buffer: 10,240K

Load Profile
The “Load Profile” section shows you the load on your instance per second and per transaction. You can compare this section between two Statspack Reports to see how the load on your instance is increasing or decreasing over time.
• Redo Size & Block Changes Increase: If you see an increase here, then more DML statements are taking place (meaning your users are doing more INSERTs, UPDATEs and DELETEs than before).
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 352,535.71 8,517.66
Logical reads: 202,403.30 4,890.29
Block changes: 2,713.47 65.56
Physical reads: 44.22 1.07
Physical writes: 27.46 0.66
User calls: 787.32 19.02
Parses: 301.40 7.28
Hard parses: 0.05 0.00
Sorts: 317.78 7.68
Logons: 0.10 0.00
Executes: 2,975.84 71.90
Transactions: 41.39

% Blocks changed per Read: 1.34 Recursive Call %: 87.43
Rollback per transaction %: 27.56 Rows per Sort: 7.22

Instance Efficiency Percentages
The “Instance Efficiency Percentages” section is very useful. It gives you an overview of your instance health. Anytime you make instance parameter changes you should take a look to see if this affects your instance efficiency in any way. Here is a description of some of the fields (Note, as stated in the statspack report, your goal here is to have these percentages be as close to 100% as possible):
• Buffer Nowait %: This is the percentage of time that the instance made a call to get a buffer (all buffer types are included here) and that buffer was made available immediately (meaning it didn’t have to wait for the buffer…hence “Buffer Nowait”).
• Buffer Hit %: This means that when a request for a buffer took place, the buffer was available in memory and physical disk I/O did not need to take place.
• Library Hit %: If your Library Hit percentage is low it could mean that your shared pool size is too small or that the bind variables are not being used (or at least being used properly).
• Execute to Parse %: This is the formula used to get this percentage:
round(100*(1-parsevalue/executevalue),2)
So, if you run some SQL and it has to be parsed every time you execute it (because no plan exists for this statement) then your percentage would be 0%. The more times that your SQL statement can reuse an existing plan the higher your Execute to Parse ratio is.

One way to increase your parse ratio is to use bind variables. This allows the same plan to be used for multiple SQL statements. The only thing that changes in the SQL is the parameters used in your statement’s WHERE clause. For Java/JDBC Programmers that means using PreparedStatements as opposed to regular Statements.
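For example (using the familiar SCOTT demo table EMP purely for illustration), literal SQL forces a separate parse for every distinct value, while a bind variable lets every execution share one cursor:

```sql
-- Each distinct literal is a different statement in the shared pool:
SELECT ename FROM emp WHERE deptno = 10;
SELECT ename FROM emp WHERE deptno = 20;

-- With a bind variable, all executions share a single parsed statement:
SELECT ename FROM emp WHERE deptno = :dept_id;
```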
• Parse CPU to Parse Elapsd %: Generally, this is a measure of how available your CPU cycles were for SQL parsing. If this is low, you may see “latch free” as one of your top wait events.
• Redo NoWait %: You guessed it…the instance didn’t have to wait to use the redo log if this is 100%.
• In-memory Sort %: This means the instance could do its sorts in memory as opposed to doing physical I/O…very good. You don’t want to be doing your sorts on disk…especially in an OLTP system. Try increasing your SORT_AREA_SIZE or PGA_AGGREGATE_TARGET in your spfile/pfile to see if that helps if your in-memory sorting is not between 95% and 100%.
• Soft Parse %: This is an important one…at least for OLTP systems. This means that your SQL is being reused. If this is low (not between 95% and 100%) then make sure that you’re using bind variables in the application and that they’re being used properly.
• Latch Hit %: This should be pretty close to 100%; if it’s not then check out what your top wait events are to try to fix the problem (pay specific attention to ‘latch free’ event).

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.98 In-memory Sort %: 100.00
Library Hit %: 100.04 Soft Parse %: 99.98
Execute to Parse %: 89.87 Latch Hit %: 94.99
Parse CPU to Parse Elapsd %: 75.19 % Non-Parse CPU: 99.46

Top 5 Timed Events (Called “Top 5 Wait Events” in 8i)
This section is crucial in determining what some of the performance drains in your database are. It will actually tell you the amount of time the instance spent waiting. Here are some common reasons for high wait events:
• DB file scattered read: This can be seen fairly often. Usually, if this number is high, then it means there are a lot of full table scans going on. This could be because you need indexes or the indexes you do have are not being used.
• DB file sequential read: This could indicate poor joining orders in your SQL or waiting for writes to ‘temp’ space. It could mean that a lot of index reads/scans are going on. Depending on the problem it may help to tune PGA_AGGREGATE_TARGET and/or DB_CACHE_SIZE.
• CPU Time: This could be completely normal. However, if this is your largest wait event then it could mean that you have some CPU intensive SQL going on. You may want to examine some of the SQL further down in the Statspack report for SQL statements that have large CPU Time.
• SQL*Net more data to client: This means the instance is sending a lot of data to the client. You can decrease this time by having the client bring back less data. Maybe the application doesn’t need to bring back as much data as it is.
• log file sync: A Log File Sync happens each time a commit takes place. If there are a lot of waits in this area then you may want to examine your application to see if you are committing too frequently (or at least more than you need to).
• Log buffer space: This happens when the instance is writing to the log buffer faster than the log writer process can actually write it to the redo logs. You could try getting faster disks, but you may want to first try increasing the size of your redo logs; that could make a big difference (and doesn’t cost much).
• Log file switch: This could mean that your committed DML is waiting for a logfile switch to occur. Make sure the filesystem where your archive logs reside is not getting full. Also, the DBWR process may not be fast enough for your system, so you could add more DBWR processes or make your redo logs larger so log switches are not needed as often.

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
db file sequential read 187,787 906 88.60
SQL*Net more data to client 49,707 57 5.55
CPU time 54 5.33
log file parallel write 1,011 2 .22
latch free 6,226 2 .16
-------------------------------------------------------------

The SQL Sections (Buffer Gets, Disk Reads, Executions, and Parse Counts)
The following sections show you the top SQL (or ‘worst performing’ SQL) grouped into four sections: Buffer Gets, Disk Reads, Executions, and Parse Counts. You’ll want to review the top SQL statements in each of these sections to see if they can be tuned better. These sections are a great way to see how many times the SQL is being executed, how much CPU time is being used to execute it, and the total time for the statement to execute.
SQL ordered by Parse Calls for DB: ORCL Instance: ORCL Snaps: 4873 -4875
-> End Parse Calls Threshold: 1000

% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
144,300 144,300 6.65 4199666855
Module: JDBC Thin Client
select parameter, value from nls_session_parameters
Note: If you take the hash value for the SQL statement and run the $ORACLE_HOME/rdbms/admin/sprepsql.sql script, entering the hash value when it prompts you, it will pull up the execution plan for that SQL statement. Pretty cool!

Instance Activity Stats
This section may provide some insight into some potential performance problems that were not as easily visible from previous sections in the report. This section is also useful when comparing statspack reports from the same timeframes on different days.

Tablespace and Data File I/O Statistics
These sections help give you some visibility into I/O rolled up to the tablespace level and I/O stats on your data files.
Tablespace IO Stats for DB: ORCL Instance: ORCL Snaps: 4873 -4875
->ordered by IOs (Reads + Writes) desc

Tablespace
——————————
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
UNDOTBS
146 0 5.8 1.0 117,119 16 50,681 1.3
APP1
19,395 3 10.5 1.0 32,613 5 1,886 2.8
INDEX1
36,919 5 0.7 6.3 977 0 526 5.0
APP2
6,969 1 11.7 1.0 13,559 2 2,513 2.5
SYSTEM
15,056 2 0.8 1.8 360 0 13 3.8

I hope this article helps you increase your ability to interpret Statspack Reports as it is an extremely useful tool for the Oracle DBA.

RAC Administration
Cluster Ready Services Administration
• To check that the CRS services are running, use
o #crsctl check crs

• Voting disk: a file that manages information about node membership.

• To retrieve the current voting disks, use
o #crsctl query css votedisk

• To back up a voting disk, use
o #dd if=voting_disk_name of=backup_file_name

• To recover a voting disk, use
o #dd if=backup_file_name of=voting_disk_name

• To multiplex (add) a voting disk, use
o #crsctl add css votedisk path

• To delete a voting disk, use
o #crsctl delete css votedisk path

• OCR: a file that manages the cluster and RAC database configuration information. The ocrconfig, ocrcheck and ocrdump utilities are used to administer the OCR. Oracle Clusterware doesn’t support more than two OCR files (a primary and a mirror).

• To add an OCR location, use
o #ocrconfig -replace ocr destination_file or disk_name

• To add an OCR mirror location, use
o #ocrconfig -replace ocrmirror destination_file or disk

• To check which OCR files are online, use
o #ocrcheck

• To allow a node to rejoin the cluster after it is restarted, use
o #ocrconfig -repair

• To repair the OCR mirror while Oracle Clusterware is stopped, use
o #ocrconfig -repair ocrmirror device_name

• To remove the OCR (at least one OCR file should remain online), use
o #ocrconfig -replace ocr

• To remove the OCR mirror, use
o #ocrconfig -replace ocrmirror
• If a node cannot start up and its alert log contains CLSD-1009 and CLSD-1011 messages, then overwrite the OCR with ocrconfig -overwrite.
• Oracle Clusterware automatically backs up the OCR every four hours; to take a manual backup, use
o #ocrconfig -manualbackup

• To list the backups, use
o #ocrconfig -showbackup

• To view the contents of a backup file, use
o #ocrdump -backupfile file_name

• To restore an OCR file, stop CRS and then use
o #ocrconfig -restore backup_file_name

• To export the OCR, use
o #ocrconfig -export file_name

• To check the OCR integrity, use
o #cluvfy comp ocr -n rac1,rac2 -verbose (or ocrcheck)

• ocrcheck and ocrdump can be used to diagnose OCR problems.
• To upgrade or downgrade the OCR, use
o #ocrconfig -upgrade/downgrade

Cluster Verification Utility (#cluvfy)
• It is used to verify the primary cluster components.
• To list the nodes, use
o #cluvfy -n all

• To know the nodes configured in Oracle Clusterware, use the olsnodes utility.
• To verify the system requirements on the nodes before installing OCW, use
o #cluvfy comp sys [-n node_list] -p {crs|database} -verbose

• To verify the system requirements for installing OCW, use
o #cluvfy comp sys -n node1,node2 -p crs -verbose

• To verify shared storage accessibility, use
o #cluvfy comp ssa -n node1,node2 -s storageID_list -verbose
• To discover all of the shared storage systems available, use
o #cluvfy comp ssa -n all -verbose
o #cluvfy comp ssa -n all -s /dev/sdb -verbose

• To verify node reachability, use
o #cluvfy comp nodereach -n node_list [-srcnode node] [-verbose]
• To verify the connectivity between the cluster nodes through all of the available network interfaces, use
o #cluvfy comp nodecon -n node_list [ -i interface_list] [-verbose] OR
o #cluvfy comp nodecon -n all -verbose

• To verify user permissions and administrative privileges, use
o #cluvfy comp admprv [-n node_list] [-verbose] | -o user_equiv/crs_inst/db_inst/db_config -d oracle_home

• To verify that the system meets the requirements for OCW installation, use
o #cluvfy stage -pre crsinst -n node_list -verbose

• To verify that OCW is functioning properly, use
o #cluvfy stage -post crsinst -n node_list -verbose

• To verify that the system meets the requirements for RAC installation, use
o #cluvfy stage -pre dbinst -n node1,node2 -verbose

• To verify that the system meets the requirements for database creation or configuration change, use
o #cluvfy stage -pre dbcfg -n node1,node2 -d oracle_home -verbose

• To check the integrity of the entire cluster, use
o #cluvfy comp clu

• To verify all of the Clusterware components, use
o #cluvfy comp crs -n node_list -verbose

• To verify the individual cluster manager subcomponents, use
o #cluvfy comp clumgr -n node_list -verbose

• To verify the integrity of the OCR, use
o #cluvfy comp ocr
******************************************************************************************************************************************

We can use srvctl to start, stop, check the status of, add, remove, enable or disable an ASM instance.
To add ASM: #srvctl add asm -n node_name -i asm_instance_name -o oracle_home.
To remove the ASM: #srvctl remove asm -n node_name -i asm_instance_name
To enable/disable ASM: #srvctl enable/disable asm -n node_name -i asm_inst_name
To start/stop ASM instance:
#srvctl start/stop asm -n node_name [-i asm_instance_name][-o start_options] [-c connect_str]
To check the status of ASM: #srvctl status asm -n node_name
To start or stop DB instance:
#srvctl start/stop instance -d db_name -i instance_name_list [-o stop_options] [-c connect_str]
To start or stop entire cluster database:
#srvctl stop/start database -d db_name [-o start_options] [-c connect_str]
By default, Oracle Clusterware controls database auto-start in Oracle RAC environments when a system reboots. This is controlled by a policy; there are two policies, automatic (the default) and manual.
To check the current policy:
#srvctl config database -d db_name -a
To set the policy:
#srvctl add/modify database -d db_name -y policy_name
The parameters which should have identical values on all instances are active_instance_count, archive_lag_target, cluster_database, cluster_database_instances, control_files, db_domain, db_name and undo_management.
*Oracle uses the instance_number parameter to distinguish among instances at startup.
**Workload management enables you to manage workload distribution to provide high availability and scalability, using services (an instance can be preferred or available for a service).
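As a sketch, the identical-parameter rule above can be verified across instances with the GV$PARAMETER view (the parameter names listed here are just examples from that list):

```sql
-- Compare parameter values across all RAC instances; any parameter in this
-- list should show the same VALUE for every INST_ID.
SELECT inst_id, name, value
FROM   gv$parameter
WHERE  name IN ('cluster_database', 'control_files', 'db_name',
                'db_domain', 'undo_management')
ORDER  BY name, inst_id;
```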

1. Connection load balancing: Client-side LB balances connection requests across the listeners. Server-side LB: the listener directs a connection request to the best instance currently providing the service, using the Load Balancing Advisory. When you create a RAC database using DBCA, it configures and enables server-side load balancing (SSLB) by default.
If we configure Transparent Application Failover (TAF) for a connection, then Oracle moves the session to a surviving instance.
TAF can restart a query after failover has completed, but for other types of transactions (INSERT, UPDATE, DELETE) we must roll back the failed transaction and resubmit it.
Services simplify the deployment of TAF and don’t require any changes on the client side; TAF settings on a service override any TAF settings in the client connection definition.
To define a TAF policy:
EXECUTE DBMS_SERVICE.MODIFY_SERVICE (service_name => 'ksa'
,aq_ha_notifications => TRUE
,failover_method => 'BASIC'
,failover_type => 'SELECT'
,failover_retries => 180
,failover_delay => 5
,clb_goal => DBMS_SERVICE.CLB_GOAL_LONG);
Fast Application Notification (FAN): it notifies other processes about configuration and service-level status (UP or DOWN) so that immediate action can be taken. FAN can be used without programmatic changes if you are using an integrated Oracle client, i.e. Oracle Database 10g JDBC, ODP.NET or OCI.
FAN events are published using ONS and Oracle Streams Advanced Queuing.
The Oracle RAC HA framework maintains service availability by using Oracle Clusterware and resource profiles. The HA framework monitors the database and its services and sends event notifications using FAN.
The Load Balancing Advisory provides advice about how to direct incoming work to the instance that provides the best service for that work. It can be used by defining a service-level goal for each service for which we want to enable load balancing. There are two types of service-level goals:
1. SERVICE TIME: attempts to direct work requests to instances according to response time.
2. THROUGHPUT: attempts to direct work requests according to throughput.
EXECUTE DBMS_SERVICE.MODIFY_SERVICE (service_name => 'OE'
,goal => DBMS_SERVICE.GOAL_SERVICE_TIME  -- or DBMS_SERVICE.GOAL_THROUGHPUT
,clb_goal => DBMS_SERVICE.CLB_GOAL_SHORT);

We can retrieve the goal setting for services by using views DBA_SERVICES, V$SERVICES, V$ACTIVE_SERVICES.
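For example, a quick sketch of checking the goals currently set on each service via DBA_SERVICES:

```sql
-- GOAL and CLB_GOAL show the load balancing advisory and connection
-- load balancing settings for each service.
SELECT name, goal, clb_goal
FROM   dba_services;
```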
To add a middle-tier node or to update an Oracle RAC node, use racgons:
#racgons add_config hostname:port
Enable distributed transaction processing for services: for services that are going to be used for distributed transaction processing, create the service using Enterprise Manager, DBCA or SRVCTL and define only one instance as the preferred instance, e.g.
#srvctl add service -d db_name -s service_name -r RAC01 -a RAC02
Then mark the service for DTP:
EXECUTE DBMS_SERVICE.MODIFY_SERVICE(service_name => 'US.ORACLE.COM', dtp => TRUE);
Services can be administered with Enterprise Manager, DBCA, PL/SQL and SRVCTL.
You can create, modify, delete, start and stop the services.
#srvctl add service -d db_unique_name -s service_name -r preferred_list [-a available_list] [-P TAF_policy]

#srvctl start/stop service -d db_unique_name [-s service_name_list][-i inst_name][-o start_options][-c connect_str]
#srvctl enable service -d db_unique_name -s service_name_list -i inst_name

To relocate service from instance1 to instance2 use:
srvctl relocate service -d db_name -s service_name -i instance1 -t instance2
To obtain the status or configuration details of a service, use
srvctl status service -d db_name -s service_name -a
srvctl status service -d db_name -s RAC -a
srvctl status service -d db_name -s RACXDB -a
srvctl status service -d db_name -s SYS$BACKGROUND -a
srvctl status service -d db_name -s SYS$USERS -a

tkprof

tkprof is one of the most useful utilities available to DBAs for diagnosing performance issues.  It essentially formats a trace file into a more readable format for performance analysis.  The DBA can then identify and resolve performance issues such as poor SQL, indexing, and wait events. 

tkprof has been historically difficult to use for many reasons.  First, the entire process of enabling tracing, finding trace files, and executing the utility against them is a burdensome task.  Once the DBA finally has the trace file output the typical response is “Now what do I do”?  Second, even though tkprof formats the data, it lacks any additional insight needed to remedy the problems revealed.  In fact, problems are not even highlighted, thereby putting more work on the DBA to analyze the output, assess the problems, and determine what to do. 

Why, When tkprof?

The DBA will use tkprof and session tracing when the database or a particular session is having performance problems.  tkprof will generally be used infrequently, when researching a very particular performance issue.  A user may complain that the response time for a session is abysmal compared to the prior week.  Session tracing and tkprof can be used to see exactly what is happening on the database, enabling the DBA to take corrective action.

The utility can also be used to view SQL that is being executed for an application.  In some situations, this will be the only mechanism a DBA will have to view SQL.  These situations include the execution of encrypted PL/SQL code on the database or submission of SQL statements from third party applications.

Analyzing tkprof  Results

So what should DBAs be looking for?  Here’s a small checklist of items to watch for in tkprof formatted files: 

  • Compare the number of parses to number of executions.  A well-tuned system will have one parse per n executions of a statement and will eliminate the re-parsing of the same statement.  
  • Search for SQL statements that do not use bind variables (:variable).  These statements should be modified to use bind variables.
  • Identify those statements that perform full table scans, multiple disk reads, and high CPU consumption.  These performance benchmarks are defined by the DBA and need to be tailored to each database.  What may be considered a high number of disk reads for an OLTP application may not even be minimal for a data warehouse implementation.

The tkprof process will be explained in six easy steps.

Step 1: Check the Environment

Before tracing can be enabled, the environment must first be configured by performing the following steps:

  • Enable Timed Statistics – This parameter enables the collection of certain vital statistics such as CPU execution time, wait events, and elapsed times.  The resulting trace output is more meaningful with these statistics.  The command to enable timed statistics is:

ALTER SYSTEM SET TIMED_STATISTICS = TRUE;  

  • Check the User Dump Destination Directory – The trace files generated by Oracle can be numerous and large.  These files are placed by Oracle in user_dump_dest directory as specified in the init.ora.  The user dump destination can also be specified for a single session using the alter session command.  Make sure that enough space exists on the device to support the number of trace files that you expect to generate.

SQL> select value
       from v$parameter
       where name = 'user_dump_dest';

VALUE
---------------------------------
C:\oracle9i\admin\ORCL92\udump

Once the directory name is obtained, the corresponding space command (OS dependent) will report the amount of available space.  Delete unwanted trace files before starting a new trace to free up the disk space. 

Step 2: Turn Tracing On

The next step in the process is to enable tracing.  By default, tracing is disabled due to the burden (5-10%) it places on the database.  Tracing can be defined at the session level:

ALTER SESSION SET SQL_TRACE = TRUE;

DBMS_SESSION.SET_SQL_TRACE(TRUE);

 

A DBA may enable tracing for another user’s session by:

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,true);

where the sid (session ID) and serial# can be obtained from the v$session view.  This package is owned by the SYS user, and therefore the executor must be SYS or be granted the EXECUTE privilege by the SYS user. 
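A simple sketch of looking up the sid and serial# for the sessions of a given user (SCOTT here is just an example user name):

```sql
-- Find the target session(s) to trace.
SELECT sid, serial#, username, status
FROM   v$session
WHERE  username = 'SCOTT';
```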

Once tracing is enabled, Oracle generates and stores the statistics in the trace file.    The trace file name is version specific.   Table 5.1 below contains the version naming conventions for foreground processes.

Version  Naming Convention      Example
7.3.4    snnn_pid.trc           s000_4714.trc
8.0.5    ora_pid.trc            ora_2345.trc
8.1.7    ora_pid_instance.trc   ora_13737_asgard81.trc
9.x      instance_ora_pid.trc   asgard91_ora_15313.trc

Table 5.1 – Oracle Trace File Naming Conventions

Supplied with this information, the DBA can construct a query that will return the trace file for a given session or for all sessions attached to the database.  The query below (Users_Trace_Files.sql) will show the trace file name for each process.

-- users_trace_files.sql

column username format a10
column trace_file format a70

select b.username, c.value || '\' || lower(d.value) || '_ora_' ||
       to_char(a.spid, 'fm00000') || '.trc' "TRACE_FILE"
from v$process a, v$session b, v$parameter c, v$parameter d
where a.addr   = b.paddr
and c.name     = 'user_dump_dest'
and d.name     = 'db_name'
and b.username is not null;

USERNAME   TRACE_FILE
---------- --------------------------------------------------------
SYS        C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03164.trc
SCOTT      C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_02264.trc
DAVE       C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03578.trc

 

Notice that the trace files are generated per session and not per named user.  If the SYS user had two connections to the database, the commands from each session would be written to separate trace files.

The query can be modified to return the file name for the currently connected session. The script below will return the file name for the current session.

select c.value || '\' || lower(d.value) || '_ora_' ||
       to_char(a.spid, 'fm00000') || '.trc' "TRACE FILE"
from v$process a, v$session b, v$parameter c, v$parameter d
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name   = 'user_dump_dest'
and d.name   = 'db_name';

TRACE FILE
---------------------------------------------------------------
C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03164.trc

Both queries above generate the trace file names (with Oracle9i on Windows XP) that would exist if the session were to be traced.  However, there is no indication in any V$ view that a session is currently being traced.  The only way to really know if tracing is being performed is to inspect the file names and dates in user_dump_dest directory.  For this reason, a DBA should not trace a session indefinitely, as it will continue to consume both performance resources and file system resources.  If the user_dump_dest directory fills, the Oracle database will come to a screeching halt.

When the DBA determines that enough data has been gathered, the next step is to disable tracing.

Step 3: Turn Tracing Off

The same options that we use to enable tracing are used to disable it.  These include:

ALTER SESSION SET SQL_TRACE = FALSE;

DBMS_SESSION.SET_SQL_TRACE(FALSE);

To disable tracing for another user’s session use:

DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,false);

This process is a perfect candidate for automation.  The code below (start_trace.sql) creates a stored procedure that automates all the steps discussed so far.  It can also serve as a wrapper for the standard methods of enabling tracing.  Start_trace.sql accepts the sid and serial# for the session that needs tracing.  It requires a time interval, in seconds, for the trace to run so that it doesn’t run perpetually and bog down the session.  When the time has elapsed, it will disable tracing for the session and display the relevant trace information: user, time, and trace file name. 

start_trace.sql

create or replace procedure start_trace
 (
   v_sid     in number,
   v_serial# in number,
   seconds   in number)
---------------------------------------------
-- 2003 - Oracle Utilities
-- D. Moore --
-- This procedure serves as a wrapper to
-- session tracing.  It accepts
-- a sid and serial#, along with the amount of
-- time in seconds that the trace should last.
-- The trace will be stopped when that time
-- period expires.  After tracing is turned
-- off, the name of the trace file will be
-- displayed.
---------------------------------------------
IS
   v_user           varchar2 (32);
   stop_trace_cmd   varchar2 (200);
   duration         number;
   v_spid           number;
   dump_dest        varchar2 (200);
   db_name          varchar2 (32);
   v_version        varchar2 (32);
   v_compatible     varchar2 (32);
   file_name        varchar2 (32);
   no_session_found exception;
BEGIN
    begin
      select a.username, b.spid into v_user, v_spid
         from v$session a, v$process b
         where a.sid     = v_sid  and
               a.serial# = v_serial# and
               a.paddr   = b.addr;
      exception
      when NO_DATA_FOUND then
         raise no_session_found;
    end;
    dbms_system.set_sql_trace_in_session(v_sid, v_serial#, true);
    dbms_output.put_line('Tracing Started for User: ' || v_user);
    dbms_output.put_line('Tracing Start Time: '
                 || TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));
    ---------------------------------------------------
    -- Sleep for the amount of seconds specified as
    -- seconds input parameter.  When complete, stop
    -- the tracing and display the resulting trace file
    -- name
    ---------------------------------------------------
    if seconds is null then
       duration := 60;
    else
       duration := seconds;
    end if;
    dbms_lock.sleep(duration);
    -- the time allotted has now expired. Disable
    -- tracing and output the trace file information
    dbms_system.set_sql_trace_in_session(v_sid, v_serial#, false);
    dbms_output.put_line('Tracing Stop Time: '
                 || TO_CHAR(SYSDATE, 'MM-DD-YYYY HH24:MI:SS'));
    -- get all of the data needed to format the trace file name
    select value into dump_dest
       from v$parameter
       where name = 'user_dump_dest';
    select value into db_name
       from v$parameter
       where name = 'db_name';
    -- we need the version of the database in order to determine
    -- the naming scheme for the trace file
    dbms_utility.db_version(v_version, v_compatible);
    if substr(v_version,1,1) = '9' then
       file_name := db_name || '_ora_' || v_spid || '.trc';
    elsif substr(v_version,1,3) = '8.1' then
       file_name := 'ora_' || v_spid || '_' || db_name || '.trc';
    elsif substr(v_version,1,3) = '8.0' then
       file_name := 'ora_' || v_spid || '.trc';
    end if;
    dbms_output.put_line('Trace Directory: ' || dump_dest);
    dbms_output.put_line('Trace Filename: ' || file_name);
    exception
      when no_session_found then
         dbms_output.put_line('No session found for sid and serial# specified');
END start_trace;

The output from start_trace.sql is displayed below.  The time interval specified was 30 and we can see the elapsed time of the trace in the timestamps below.

SQL> exec start_trace(17, 6157, 30);

Tracing Started for User: SCOTT

Tracing Start Time: 12-26-2002 14:55:12

Tracing Stop Time: 12-26-2002 14:55:42

Trace Directory: C:\oracle9i\admin\ORCL92\udump

Trace Filename: ORCL92_ora_5472.trc

The next step is to run tkprof against the trace file.  

Step 4: Locate Trace File and Execute tkprof

Locating the file is easy because the script above gives us the file name.  tkprof will format the raw trace file, although the file is somewhat readable without tkprof.

Raw Trace File

PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535446373886 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=37469,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=535446373874
EXEC #1:c=0,e=71,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535446375834
FETCH #1:c=31250,e=42564,p=10,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535446418910
FETCH #1:c=0,e=3852,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535446424026
STAT #1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535448474894 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=146,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535448474882
EXEC #1:c=0,e=76,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535448476767
FETCH #1:c=31250,e=30553,p=12,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535448507870
FETCH #1:c=15625,e=3832,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535448512927
STAT #1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535449209407 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=111,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449209395
EXEC #1:c=0,e=74,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449211302
FETCH #1:c=31250,e=32623,p=8,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535449244513
FETCH #1:c=15625,e=3918,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535449249648
STAT #1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535449801444 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=102,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449801433
EXEC #1:c=0,e=74,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535449803310
FETCH #1:c=31250,e=31503,p=7,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535449835358
FETCH #1:c=15625,e=4039,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535449840721
STAT #1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================
PARSING IN CURSOR #1 len=44 dep=0 uid=59 oct=3 lid=59 tim=535450369301 hv=159129656 ad='12cbbe70'
 select * from employee where emp_id = 87933
END OF STMT
PARSE #1:c=0,e=101,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535450369290
EXEC #1:c=0,e=76,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=535450371203
FETCH #1:c=15625,e=28362,p=5,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535450400245
FETCH #1:c=15625,e=4333,p=0,cr=57,cu=0,mis=0,r=0,dep=0,og=4,tim=535450405578
STAT #1 cnt=1 pid=0 pos=1 obj=30497 op='TABLE ACCESS FULL EMPLOYEE '
=====================

With minimal effort, a programmer could create a trace file parser and formatter similar to tkprof that provides the trace data in a format even more suitable for analysis.
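As a sketch of that idea, the block below pulls the CPU and disk-read fields out of one FETCH line from the raw trace above.  The field_val helper is a hypothetical name, and the simple INSTR-based parsing assumes the tag (such as 'c=' or 'p=') first occurs at the field of interest:

```sql
DECLARE
   line varchar2(200) :=
     'FETCH #1:c=31250,e=42564,p=10,cr=416,cu=0,mis=0,r=1,dep=0,og=4,tim=535446418910';
   -- extract the numeric value following a tag such as 'c=' up to the next comma
   FUNCTION field_val(p_line varchar2, p_tag varchar2) RETURN number IS
      v_start pls_integer := instr(p_line, p_tag) + length(p_tag);
      v_end   pls_integer := instr(p_line, ',', v_start);
   BEGIN
      RETURN to_number(substr(p_line, v_start, v_end - v_start));
   END;
BEGIN
   dbms_output.put_line('cpu (microseconds): ' || field_val(line, 'c='));
   dbms_output.put_line('disk reads:         ' || field_val(line, 'p='));
END;
/
```

A full formatter would loop over the file with UTL_FILE and aggregate these values per cursor, much as tkprof does.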

The tkprof command can now be executed from the operating system prompt.

 

C:\oracle9i\admin\ORCL92\udump>tkprof ORCL92_ora_3064.trc output.txt insert=tkprof.sql record=Allsql.sql

tkprof: Release 9.2.0.1.0 – Production on Thu Dec 26 13:22:29 2002

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Based on the command above, tkprof will process the file ORCL92_ora_3064.trc and format the results in the file output.txt.  Two other files were also created (tkprof.sql, allsql.sql) that will be discussed later.

Step 5: Analyze tkprof Output

This is the most difficult step in the process.  Each tkprof output file contains a header, body, and summary section.  The header simply displays the trace file name, definitions, and sort options selected.  The body contains the performance metrics for SQL statements.  The summary section contains an aggregate of performance statistics for all SQL statements in the file. 

tkprof Output

tkprof: Release 9.2.0.1.0 – Production on Tue Dec 24 15:32:43 2002

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Trace file: ORCL92_ora_3064.trc

Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed

cpu      = cpu time in seconds executing

elapsed  = elapsed time in seconds executing

disk     = number of physical reads of buffers from disk

query    = number of buffers gotten for consistent read

current  = number of buffers gotten in current mode (usually for update)

rows     = number of rows processed by the fetch or execute call

********************************************************************************

select *
from
 employee where emp_id = 87933

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       10      0.00       0.03          0          0          0           0
Execute     10      0.00       0.00          0          0          0           0
Fetch       20      0.34       0.35         72       4730          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       40      0.34       0.39         72       4730          0          10

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 59

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL EMPLOYEE

********************************************************************************

The output displays a table of performance metrics after each unique SQL statement.  Each row in the table corresponds to each of the three steps required in SQL processing.

1.   Parse – The translation of the SQL into an execution plan.  This step includes syntax checks, permissions, and all object dependencies.

2.   Execute – The actual execution of the statement.

3.   Fetch – The number of rows returned for a SELECT statement.

The table columns include the following:

  • Count – The number of times a statement was parsed, executed, or fetched.
  • CPU – The total CPU time in seconds for all parse, execute, or fetch calls.
  • Elapsed – Total elapsed time in seconds for all parse, execute, or fetch calls.
  • Disk – The number of physical disk reads from the datafiles for all parse, execute, or fetch calls.
  • Query – The number of buffers retrieved for all parse, execute, or fetch calls.
  • Current – The number of buffers retrieved in current mode (INSERT, UPDATE, or DELETE statements).

Observe from the tkprof output above that the SQL statement performed a TABLE ACCESS FULL, meaning a full-table scan.  Full-table scans can degrade performance, especially when accessing a small subset of the data in a table.  In this case, the query is selecting one row, yet all 100,000 rows in the table are scanned.  This is a perfect situation to add an index on the EMP_ID column of the EMPLOYEE table:

SQL> CREATE INDEX emp_idx1 ON employee (emp_id);

Index created.

Let’s examine the performance of this query again, this time with the index enabled. 

select *
from
 employee where emp_id = 87933

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.05          1          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.03          3          4          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.03       0.09          4          5          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 59

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID EMPLOYEE
      1   INDEX RANGE SCAN EMP_IDX1 (object id 30498)

********************************************************************************

The CPU time improved by a factor of 11 (0.03 vs. 0.34 seconds) compared to the benchmark before the index was added.

Step 6: Load tkprof Results into Tables

Loading tkprof data into the database is optional, but it can be worthwhile for those DBAs who want historical data or the ability to access the data via SQL queries to generate reports.  The command used earlier specified insert=tkprof.sql, which generated the following SQL in tkprof.sql.

CREATE TABLE  tkprof_table

(

 date_of_insert                       DATE

,cursor_num                           NUMBER

,depth                                NUMBER

,user_id                              NUMBER

,parse_cnt                            NUMBER

,parse_cpu                            NUMBER

,parse_elap                           NUMBER

,parse_disk                           NUMBER

,parse_query                          NUMBER

,parse_current                        NUMBER

,parse_miss                           NUMBER

,exe_count                            NUMBER

,exe_cpu                              NUMBER

,exe_elap                             NUMBER

,exe_disk                             NUMBER

,exe_query                            NUMBER

,exe_current                          NUMBER

,exe_miss                             NUMBER

,exe_rows                             NUMBER

,fetch_count                          NUMBER

,fetch_cpu                            NUMBER

,fetch_elap                           NUMBER

,fetch_disk                           NUMBER

,fetch_query                          NUMBER

,fetch_current                        NUMBER

,fetch_rows                           NUMBER

,ticks                                NUMBER

,sql_statement                        LONG

);

INSERT INTO tkprof_table VALUES
(
  SYSDATE, 1, 0, 59, 0, 0, 0, 0, 0, 0, 0
, 1, 0, 192, 0, 0, 0, 1, 0
, 0, 0, 0, 0, 0, 0, 0, 4294966155
, 'alter session set sql_trace=true
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 2, 1, 0, 1, 0, 1232, 0, 0, 0, 1
, 1, 0, 745, 0, 0, 0, 0, 0
, 1, 0, 115, 0, 3, 0, 1, 17866289
, 'select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, spare
2 from obj$ where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or r
emoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is
null)and(subname=:6 or subname is null and :6 is null)
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 3, 1, 0, 1, 0, 1400, 0, 0, 0, 1
, 1, 0, 658, 0, 0, 0, 0, 0
, 1, 0, 131, 0, 3, 0, 1, 5463
, 'select ts#,file#,block#,nvl(bobj#,0),nvl(tab#,0),intcols,nvl(clucols,0),audit
$,flags,pctfree$,pctused$,initrans,maxtrans,rowcnt,blkcnt,empcnt,avgspc,chncnt,a
vgrln,analyzetime, samplesize,cols,property,nvl(degree,1),nvl(instances,1),avgsp
c_flb,flbcnt,kernelcols,nvl(trigflag, 0),nvl(spare1,0),nvl(spare2,0),spare4,spar
e6 from tab$ where obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 4, 1, 0, 2, 0, 1110, 0, 0, 0, 1
, 2, 15625, 757, 0, 0, 0, 0, 0
, 2, 0, 221, 0, 6, 0, 2, 8966
, 'select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,NVL(
lists,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0) from seg$ where
ts#=:1 and file#=:2 and block#=:3
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 4, 1, 0, 1, 0, 1802, 0, 0, 0, 1
, 1, 0, 1089, 0, 0, 0, 0, 0
, 2, 0, 489, 0, 5, 0, 1, 23441
, 'select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags, i.property,i.
pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i.distkey, i.lblkkey,i.dblkkey
,i.clufac,i.cols,i.analyzetime,i.samplesize,i.dataobj#, nvl(i.degree,1),nvl(i.in
stances,1),i.rowcnt,mod(i.pctthres$,256),i.indmethod#,i.trunccnt,nvl(c.unicols,0
),nvl(c.deferrable#+c.valid#,0), nvl(i.spare1,i.intcols),i.spare4,spare2,spare6,
 decode(i.pctthres$,null,null, mod(trunc(i.pctthres$/256),256)) from ind$ i, (se
lect enabled, min(cols) unicols, min(to_number(bitand(defer,1))) deferrable#, mi
n(to_number(bitand(defer,4))) valid# from cdef$ where obj#=:1 and enabled > 1 gr
oup by enabled) c where i.obj#=c.enabled(+) and i.bo#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 5, 1, 0, 1, 0, 910, 0, 0, 0, 1
, 1, 0, 573, 0, 0, 0, 0, 0
, 2, 0, 147, 0, 3, 0, 1, 5409
, 'select pos#,intcol#,col#,spare1,bo#,spare2 from icol$ where obj#=:1
');
INSERT INTO tkprof_table VALUES
(
  SYSDATE, 6, 1, 0, 1, 15625, 1426, 0, 0, 0, 1
, 1, 0, 775, 0, 0, 0, 0, 0
, 6, 0, 1744, 0, 3, 0, 5, 10773
, 'select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl
(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,scale,18
3,scale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,rowid,col#,pro
perty, nvl(charsetid,0),nvl(charsetform,0),spare1,spare2,nvl(spare3,0) from col$
 where obj#=:1 order by intcol#
');
INSERT INTO tkprof_table VALUES
(  SYSDATE, 8, 1, 0, 1, 0, 831, 0, 0, 0, 1
, 1, 0, 597, 0, 0, 0, 0, 0, 1, 0, 59, 0, 1, 0, 0, 5736
, 'select con#,obj#,rcon#,enabled,nvl(defer,0) from cdef$ where robj#=:1');
INSERT INTO tkprof_table VALUES
(  SYSDATE, 9, 1, 0, 1, 0, 973, 0, 0, 0, 1
, 1, 0, 650, 0, 0, 0, 0, 0, 1, 0, 43, 0, 2, 0, 0, 5050
, 'select con#,type#,condlength,intcols,robj#,rcon#,match#,refact,nvl(enabled,0)
,rowid,cols,nvl(defer,0),mtime,nvl(spare1,0) from cdef$ where obj#=:1');
INSERT INTO tkprof_table VALUES
(  SYSDATE, 1, 0, 59, 1, 31250, 58068, 1, 1, 0, 1
, 1, 0, 85, 0, 0, 0, 0, 0, 2, 0, 37301, 3, 4, 0, 1, 39511
, 'select * from employee where emp_id = 87933');
INSERT INTO tkprof_table VALUES
(  SYSDATE, 2, 1, 0, 2, 0, 1122, 0, 0, 0, 1, 2, 0, 672, 0, 0, 0, 0, 0
, 2, 0, 178, 0, 6, 0, 2, 12416444
, 'select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.datao
bj#,o.flags from obj$ o where o.obj#=:1');
INSERT INTO tkprof_table VALUES(  SYSDATE, 1, 0, 59, 1, 0, 353, 0, 0, 0, 1
, 1, 0, 148, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1160, 'alter session set sql_trace=false
');

This file contains the DDL to create the table as well as the data to load.  If the table already exists, the error raised when it tries to create the table again can be ignored; the inserts will still load the data.

tkprof  Command Line Options

tkprof provides many useful command line options that provide additional functionality for the DBA.

  • print – Lists only the first n SQL statements in the output file.  If nothing is specified, all statements will be listed.  Use this option when the list needs to be limited to the “Top n” statements.  This is useful when combined with a sorting option to enable the top n statements by CPU, or disk reads, or parses, etc.
  • aggregate – When “Yes”, tkprof will combine the statistics from multiple user executions of the same SQL statement.  When “No”, the statistics will be listed each time the statement is executed.
  • insert – Creates a file that will load the statistics into a table in the database for further processing.  Choose this option if you want to perform any advanced analysis of the tkprof output.
  • sys – Enables or disables the inclusion of SQL statements executed by the SYS user, including recursive SQL statements.  The default is to enable. 
  • table – Used in the Explain Plan command (if specified) for Oracle to load data temporarily into an Oracle table.  The user must specify the schema and table name for the plan table.  If the table exists, all rows will be deleted; otherwise, tkprof will create the table and use it.
  • record – creates a SQL script with the specified filename that contains all non-recursive SQL statements from the trace file.  For DBAs wanting to log the SQL statements in a separate file, this is the option to use.  In the example earlier, the contents of the Allsql.sql file include:

alter session set sql_trace=true ;

select * from employee where emp_id = 87933 ;

alter session set sql_trace=false ;

  • explain – Executes an Explain Plan for each statement in the trace file and displays the output.  Explain Plan is less useful when used in conjunction with tkprof than it is when used alone.  Explain Plan provides the predicted optimizer execution path without actually executing the statement.  tkprof shows you the actual execution path and statistics after the statement is executed.  In addition, running Explain Plan against SQL statements that were captured and saved is always problematic given dependencies and changes in the database environment.
  • sort – Sorts the SQL statements in the trace file by the criteria deemed most important by the DBA.  This option allows the DBA to view the SQL statements that consume the most resources at the top of the file, rather than searching the entire file contents for the poor performers.  The following are the data elements available for sorting:
  • prscnt – The number of times the SQL was parsed.
  • prscpu – The CPU time spent parsing.
  • prsela – The elapsed time spent parsing the SQL.
  • prsdsk – The number of physical reads required for the parse.
  • prsqry – The number of consistent block reads required for the parse.
  • prsmis – The number of library cache misses during the parse.
  • prscu – The number of current block reads required for the parse.
  • execnt – The number of times the SQL statement was executed. 
  • execpu – The CPU time spent executing the SQL. 
  • exeela – The elapsed time spent executing the SQL.
  • exedsk – The number of physical reads during execution.
  • exeqry – The number of consistent block reads during execution.
  • execu – The number of current block reads during execution.
  • exerow – The number of rows processed during execution.
  • exemis – The number of library cache misses during execution.
  • fchcnt – The number of fetches performed.
  • fchcpu – The CPU time spent fetching rows.
  • fchela – The elapsed time spent fetching rows.
  • fchdsk – The number of physical disk reads during the fetch.
  • fchqry – The number of consistent block reads during the fetch.
  • fchcu – The number of current block reads during the fetch.
  • fchrow – The number of rows fetched for the query.

Many sort options exist, but some are more useful than others.  execpu, execnt, exedsk, and prscnt are the most useful sort parameters when formatting trace output with tkprof, because they are the most indicative of common SQL performance issues and bubble the worst offenders to the top.  In particular, sorting by execpu surfaces the SQL statements that used the most CPU.  The prscnt parameter is important because it shows the SQL statements that are parsed most often, usually a result of not using bind variables.
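For instance, to see only the ten statements that consumed the most CPU during execution, the sort and print options can be combined (the trace and output file names below simply follow the earlier example):

```
C:\oracle9i\admin\ORCL92\udump>tkprof ORCL92_ora_3064.trc top10.txt sort=execpu print=10
```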

SQL Execution Statistics in 9i

The SQL tuning process prior to 9.2 involved executing SQL commands, then OS commands, and then SQL commands again.  This is a very time-consuming and burdensome process.  In 9.2, Oracle decided to retain the SQL metrics for each statement in the SGA (library cache) while the statement remains cached.  The DBA could then diagnose SQL issues at a SQL prompt and leave tkprof alone.  This is a vast improvement over prior versions.

Oracle9.2 contains the following views that enable the DBA to identify SQL issues directly from a SQL prompt.  These views should be used to periodically check SQL statistics and full-table scans, alerting the DBA to problem areas requiring corrective action.

  • v$sql_plan – This view shows the same information as Explain Plan, except that it is the actual execution plan and not the predicted one – just like tkprof, and even better than Explain Plan.
  • v$sql_plan_statistics – This view contains the execution statistics for each operation (step) in v$sql_plan.  Queries should access this view and look for poor SQL operations, including TABLE ACCESS FULL – full-table scans.
  • v$sql_plan_statistics_all – This view combines data from v$sql_plan, v$sql_plan_statistics, and v$sql_workarea.

Neither v$sql_plan_statistics nor v$sql_plan_statistics_all is populated by default.  The initialization parameter statistics_level must be set to ALL.  
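A session that wants these views populated might do something like the following; the full-table-scan check mirrors the tkprof analysis above (on 9.2 the view exposes hash_value rather than the sql_id column of later releases):

```sql
-- Collect plan statistics for statements parsed from here on
ALTER SESSION SET statistics_level = ALL;

-- ... run the SQL of interest ...

-- Look for full-table scans directly from the SQL prompt
SELECT hash_value, operation, options, object_name
  FROM v$sql_plan
 WHERE operation = 'TABLE ACCESS'
   AND options   = 'FULL';
```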

Best Practices for Using tkprof

Enable tracing only on those sessions that are having problems.  Be selective to minimize the performance burden on the sessions and to retain more free space in the user dump destination directory.

Rename trace files after tracing is disabled.  The new file name should be something more meaningful that will be recognizable at a later date.  The name employee_index.trc is much more meaningful than ORCL92_ora_3172.trc.

Delete trace files that are no longer needed to reduce clutter and free disk space.

Explain Plan is not as useful when used in conjunction with tkprof since the trace file contains the actual execution path of the SQL statement.  Use Explain Plan when anticipated execution statistics are desired without actually executing the statement.

When tracing a session, remember that nothing in v$session indicates that a session is being traced.  Therefore, trace with caution and remember to disable tracing after an adequate amount of trace data has been generated.

tkprof does not control the contents of a trace file; it simply formats them.  Oracle provides multiple ways to actually generate the trace file.  tkprof is valuable for detailed trace file analysis.  For those DBAs who prefer a simpler tracing mechanism with instant feedback, the autotrace utility should be used.

 To take tkprof to the next level, consider using Trace Analyzer.

Automatic Diagnostic Repository (ADR):-
ADR is a file-based repository for diagnostic data such as trace files, process dumps, and data structure dumps. In Oracle 11g, trace and alert files are no longer saved in the *_DUMP_DEST directories, even if you set those parameters in init.ora; 11g ignores *_DUMP_DEST and stores the data in the new ADR structure.

A new initialization parameter, DIAGNOSTIC_DEST, decides the location of the ADR root.

Database Capture/Replay of database workloads :-
Allows the total database workload to be captured, transferred to a test database created from a backup or standby database, and then replayed to test the effects of an upgrade or system change. Oracle is targeting a capture performance overhead of about 5%, so you could conceivably capture real production workloads.
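A capture is driven through the DBMS_WORKLOAD_CAPTURE package.  The sketch below is illustrative only; the capture name and the CAPTURE_DIR directory object are assumptions:

```sql
BEGIN
   -- start recording the workload into the named directory object
   dbms_workload_capture.start_capture(name => 'peak_load', dir => 'CAPTURE_DIR');
END;
/

-- ... let the production workload run ...

BEGIN
   -- stop recording; the capture files can then be moved to the test system
   dbms_workload_capture.finish_capture;
END;
/
```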

Automatic Memory Tuning:-
Automatic PGA tuning was introduced in Oracle 9i.
Automatic SGA tuning was introduced in Oracle 10g.
In 11g, all memory can be tuned automatically by setting one parameter. You literally tell Oracle how much memory it has and it determines how much to use for PGA, SGA and OS Processes. Maximum and minimum thresholds can be set.
To enable it, set the parameters MEMORY_TARGET and MEMORY_MAX_TARGET, while SGA_TARGET and PGA_AGGREGATE_TARGET should be set to zero.
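As a sketch, with the sizes below chosen purely for illustration:

```sql
-- Let the instance manage about 1 GB across SGA and PGA
ALTER SYSTEM SET memory_max_target = 1200M SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 1G SCOPE = SPFILE;
-- Zero these so automatic memory management takes over
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE = SPFILE;
-- memory_max_target is static, so restart the instance
SHUTDOWN IMMEDIATE
STARTUP
```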

Database Health Check:-
Oracle introduced a new feature in 11g for monitoring database health, named Health Monitor. With Health Monitor, Oracle automatically runs diagnostic operations to check for database corruption or errors.

Data Recovery Advisor:-
Data Recovery Advisor is an Oracle Database 11g tool that automatically diagnoses data failures, determines and presents appropriate repair options, and executes repairs at the user’s request. Data Recovery Advisor can diagnose failures such as the following:
Inaccessible components such as datafiles and control files
Physical corruptions such as block checksum failures and invalid block header field values
Inconsistent datafiles (online and offline)
I/O failures
The advisor, however, does not recover from failures on standby databases or in RAC environments. This advisor can be used through RMAN or Enterprise Manager.
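From RMAN, a typical Data Recovery Advisor session walks through the failures like this:

```
RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;
```

LIST FAILURE shows what the advisor has diagnosed, ADVISE FAILURE generates the repair options, and REPAIR FAILURE (optionally previewed first) executes them.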

Oracle Flashback-Related New Features
Oracle Database 11g introduces the following new Flashback-related features:
•LogMiner Interface in Oracle Enterprise Manager
•Oracle Flashback Transaction Backout
•Oracle Flashback Data Archives

Case sensitive passwords :-
In 11g, passwords are case sensitive by default. A new init.ora parameter, sec_case_sensitive_logon, is introduced in 11g. Its values can be:
sec_case_sensitive_logon = (TRUE | FALSE)
At the system level you can switch it off:
alter system set sec_case_sensitive_logon = false;
To find users who have case-sensitive or case-insensitive passwords:

SQL> SELECT USERNAME, PASSWORD_VERSIONS FROM DBA_USERS;

For a case-insensitive password file, use: orapwd file=orclpwd password=manager ignorecase=y

SQL Performance Analyzer (SPA):-
SPA is a new feature in 11g. It is used to measure the impact of database changes on SQL statement execution performance. If it finds degradation in the performance of one or more SQL statements, it gives recommendations on how to improve them.

Active Database Duplication
The DUPLICATE command is now network aware, i.e. we can create a duplicate or standby database over the network without taking a new backup or using an old one.

RMAN UNDO bypass : An RMAN backup can now bypass undo. Undo tablespaces can grow huge but contain a lot of information that is useless for recovery, and RMAN can now skip it.
Fast incremental backups on a physical standby database : In 11g we can enable block change tracking on a physical standby database (ALTER DATABASE ENABLE/DISABLE BLOCK CHANGE TRACKING SQL statement). This new 11g feature enables faster incremental backups on a physical standby database than in previous releases, because RMAN identifies the blocks changed since the last incremental backup.

Improved block media recovery performance:-
The RECOVER command can now recover individual data blocks. RMAN can take older, uncorrupted versions of the blocks from the flashback logs and use them, thereby speeding up block media recovery.

ADDM RAC Enhancements:-
In a RAC environment, running the addmrpt.sql script gives a report for a single instance, not for all instances in the RAC. Using DBMS_ADDM, we can generate a report covering all instances of the RAC.
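A sketch of generating such a report; the task name and the AWR snapshot IDs (100 and 110) are assumptions for illustration:

```sql
VARIABLE task_name VARCHAR2(60)
BEGIN
   :task_name := 'rac_addm_report';
   -- analyze all instances of the database between two AWR snapshots
   dbms_addm.analyze_db(:task_name, 100, 110);
END;
/
-- display the resulting ADDM report
SELECT dbms_addm.get_report(:task_name) FROM dual;
```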

Oracle 10g Release 2 (10.2.0) – September 2005

  • Transparent Data Encryption
  • Async commits
  • CONNECT role can now only connect (reduced to just the CREATE SESSION privilege)
  • Passwords for DB Links are encrypted
  • New asmcmd utility for managing ASM storage

Oracle 10g Release 1 (10.1.0)

  •  Grid computing – an extension of the clustering feature (Real Application Clusters)
  • Manageability improvements (self-tuning features)
  • Performance and scalability improvements
  • Automated Storage Management (ASM)
  • Automatic Workload Repository (AWR)
  • Automatic Database Diagnostic Monitor (ADDM)
  • Flashback operations available on row, transaction, table or database level
  • Ability to UNDROP a table from a recycle bin
  • Ability to rename tablespaces
  • Ability to transport tablespaces across machine types (e.g. Windows to Unix)
  • New ‘drop database’ statement
  • New database scheduler – DBMS_SCHEDULER
  • DBMS_FILE_TRANSFER Package
  • Support for bigfile tablespaces that is up to 8 Exabytes in size
  • Data Pump – faster data movement with expdp and impdp.

Oracle 9i Release 2 (9.2.0)

  • Locally Managed SYSTEM tablespaces
  • Oracle Streams – new data sharing/replication feature (can potentially replace Oracle Advance Replication and Standby Databases)
  • XML DB (Oracle is now a standards compliant XML database)
  • Data segment compression (compress keys in tables – only when loading data)
  • Cluster file system for Windows and Linux (raw devices are no longer required).
  • Create logical standby databases with Data Guard
  • Java JDK 1.3 used inside the database (JVM)
  • Oracle Data Guard Enhancements (SQL Apply mode – logical copy of primary database, automatic failover)
  • Security Improvements – Default Install Accounts locked, VPD on synonyms, AES, Migrate Users to Directory

Oracle 9i Release 1 (9.0.1) – June 2001 

  • Traditional rollback segments (RBS) are still available, but can be replaced with automated System Managed Undo (SMU). Using SMU, Oracle creates its own “Rollback Segments” and sizes them automatically without any DBA involvement.
  • Flashback query (dbms_flashback.enable) – one can query data as it looked at some point in the past. This feature will allow users to correct wrongly committed transactions without contacting the DBA to do a database restore.
  • Use Oracle Ultra Search for searching databases, file systems, etc. The UltraSearch crawler fetches data and hands it to Oracle Text to be indexed.
  • Oracle Nameserver is still available, but deprecated in favour of LDAP Naming (using the Oracle Internet Directory Server). A nameserver proxy is provided for backwards compatibility, as pre-8i clients cannot resolve names from an LDAP server.
  • Oracle Parallel Server’s (OPS) scalability was improved – now called Real Application Clusters (RAC). Full Cache Fusion implemented. Any application can scale in a database cluster. Applications don’t need to be cluster aware anymore.
  • The Oracle Standby DB feature renamed to Oracle Data Guard. New Logical Standby databases replay SQL on standby site allowing the database to be used for normal read write operations. The Data Guard Broker allows single step fail-over when disaster strikes.
  • Scrolling cursor support. Oracle9i allows fetching backwards in a result set.
  • Dynamic Memory Management – Buffer Pools and shared pool can be resized on-the-fly. This eliminates the need to restart the database each time parameter changes were made.
  • On-line table and index reorganization.
  • VI (Virtual Interface) protocol support, an alternative to TCP/IP, available for use with Oracle Net (SQL*Net). VI provides fast communications between components in a cluster.
  • Built-in XML Developer’s Kit (XDK). New data types for XML (XMLType), URIs, etc. XML integrated with AQ.
  • The Cost Based Optimizer now also considers memory and CPU, not only disk access cost as before.
  • PL/SQL programs can be natively compiled to binaries.
  • Deep data protection – fine grained security and auditing. Security is enforced at the DB level; SQL access does not mean unrestricted access.
  • Resumable backups and statements – suspend statement instead of rolling back immediately.
  • List Partitioning – partitioning on a list of values.
  • ETL (eXtract, transformation, load) Operations – with external tables and pipelining.
  • OLAP – Express functionality included in the DB.
  • Data Mining – Oracle Darwin’s features included in the DB.

 

Oracle 8i (8.1.7)

 Static HTTP server included (Apache)

  • JVM Accelerator to improve performance of Java code
  • Java Server Pages (JSP) engine
  • MemStat – A new utility for analyzing Java Memory footprints
  • OIS – Oracle Integration Server introduced.
  • PLSQL Gateway introduced for deploying PL/SQL based solutions on the Web
  • Enterprise Manager Enhancements – including new HTML based reporting and Advanced Replication functionality included.
  • New Database Character Set Migration utility included.

Oracle 8i (8.1.6) 

  • PL/SQL Server Pages (PSP’s)
  • DBA Studio Introduced
  • Statspack
  • New SQL Functions (rank, moving average)
  • ALTER FREELISTS command (previously done by DROP/CREATE TABLE)
  • Checksums always on for SYSTEM tablespace allowing many possible corruptions to be fixed before writing to disk
  • XML Parser for Java
  • New PLSQL encrypt/decrypt package introduced
  • User and Schemas separated
  • Numerous Performance Enhancements

Oracle 8i (8.1.5)

 Fast Start recovery – Checkpoint rate auto-adjusted to meet roll forward criteria

  • Online index rebuilds – reorganize indexes/index-organized tables while users are accessing the data
  • Log Miner introduced – Allows on-line or archived redo logs to be viewed via SQL
  • OPS Cache Fusion introduced avoiding disk I/O during cross-node communication
  • Advanced Queueing improvements (security, performance, OO4O support)
  • User Security Improvements – more centralisation, single enterprise user, users/roles across multiple databases.
  • Virtual private database
  • JAVA stored procedures (Oracle Java VM)
  • Oracle iFS
  • Resource Management using priorities – resource classes
  • Hash and Composite partitioned table types
  • SQL*Loader direct load API
  • Copy optimizer statistics across databases to ensure same access paths across different environments.
  • Standby Database – Auto shipping and application of redo logs. Read Only queries on standby database allowed.
  • Enterprise Manager v2 delivered
  • NLS – Euro Symbol supported
  • Analyze tables in parallel
  • Temporary tables supported.
  • Net8 support for SSL, HTTP, IIOP protocols
  • Transportable tablespaces between databases
  • Locally managed tablespaces – automatic sizing of extents, elimination of tablespace fragmentation, tablespace information managed in tablespace (i.e moved from data dictionary) improving tablespace reliability
  • Drop Column on table (Finally !!!!!)
  • DBMS_DEBUG PL/SQL package, DBMS_SQL replaced by new EXECUTE IMMEDIATE statement
  • Progress Monitor to track long running DML, DDL
  • Functional Indexes – NLS, case insensitive, descending

Oracle 8.0 – June 1997

  • Object Relational database
  • Object Types (not just date, character, number as in v7)
  • SQL3 standard
  • Call external procedures
  • LOB >1 per table
  • Partitioned Tables and Indexes
  • export/import individual partitions
  • partitions in multiple tablespaces
  • Online/offline, backup/recover individual partitions
  • merge/balance partitions
  • Advanced Queuing for message handling
  • Many performance improvements to SQL/PLSQL/OCI making more efficient use of CPU/Memory. V7 limits extended (e.g. 1000 columns/table, 4000 bytes VARCHAR2)
  • Parallel DML statements
  • Connection Pooling (uses the physical connection for idle users and transparently re-establishes the connection when needed) to support more concurrent users.
  • Improved “STAR” Query optimizer
  • Integrated Distributed Lock Manager in Oracle PS (as opposed to Operating system DLM in v7).
  • Performance improvements in OPS – global V$ views introduced across all instances, transparent failover to a new node
  • Data Cartridges introduced on database (e.g. image, video, context, time, spatial)
  • Backup/Recovery improvements – Tablespace point in time recovery, incremental backups, parallel backup/recovery. Recovery manager introduced
  • Security Server introduced for central user administration. User password expiry, password profiles, allow custom password scheme. Privileged database links (no need for password to be stored)
  • Fast Refresh for complex snapshots, parallel replication, PL/SQL replication code moved in to Oracle kernel. Replication manager introduced.
  • Index Organized tables
  • Deferred integrity constraint checking (deferred until end of transaction instead of end of statement).
  • SQL*Net replaced by Net8
  • Reverse Key indexes
  • Any VIEW updateable
  • New ROWID format

Oracle 7.3

  • Partitioned Views
  • Bitmapped Indexes
  • Asynchronous read ahead for table scans
  • Standby Database
  • Deferred transaction recovery on instance startup
  • Updatable Join Views (with restrictions)
  • SQLDBA no longer shipped.
  • Index rebuilds
  • db_verify introduced
  • Context Option
  • Spatial Data Option
  • Tablespace changes – Coalesce, Temporary/Permanent
  • Trigger compilation, debug
  • Unlimited extents on STORAGE clause.
  • Some init.ora parameters modifiable – TIMED_STATISTICS
  • HASH Joins, Antijoins
  • Histograms
  • Dependencies
  • Oracle Trace
  • Advanced Replication Object Groups
  • PL/SQL – UTL_FILE

Oracle 7.2

 Resizable, autoextend data files

  • Shrink Rollback Segments manually
  • Create table, index UNRECOVERABLE
  • Subquery in FROM clause
  • PL/SQL wrapper
  • PL/SQL Cursor variables
  • Checksums – DB_BLOCK_CHECKSUM, LOG_BLOCK_CHECKSUM
  • Parallel create table
  • Job Queues – DBMS_JOB
  • DBMS_SPACE
  • DBMS Application Info
  • Sorting Improvements – SORT_DIRECT_WRITES

Oracle 7.1

 ANSI/ISO SQL92 Entry Level

  • Advanced Replication – Symmetric Data replication
  • Snapshot Refresh Groups
  • Parallel Recovery
  • Dynamic SQL – DBMS_SQL
  • Parallel Query Options – query, index creation, data loading
  • Server Manager introduced
  • Read Only tablespaces

Oracle 7.0 – June 1992 

  • Database Integrity Constraints (primary, foreign keys, check constraints, default values)
  • Stored procedures and functions, procedure packages
  • Database Triggers
  • View compilation
  • User defined SQL functions
  • Role based security
  • Multiple Redo members – mirrored online redo log files
  • Resource Limits – Profiles
  • Much enhanced Auditing
  • Enhanced Distributed database functionality – INSERTS, UPDATES,DELETES, 2PC
  • Incomplete database recovery (e.g SCN)
  • Cost based optimiser
  • TRUNCATE tables
  • Datatype changes (i.e. VARCHAR2, CHAR, VARCHAR)
  • SQL*Net v2, MTS
  • Checkpoint process
  • Data replication – Snapshots

Oracle 6.2 

  • Oracle Parallel Server

Oracle 6 – July 1988 

  • Row-level locking
  • On-line database backups
  • PL/SQL in the database

Oracle 5.1 

  • Distributed queries

Oracle 5.0 – 1986 

  • Support for the Client-Server model – PCs can access the DB on a remote host

Oracle 4 – 1984 

  • Read consistency

Oracle 3 – 1981 

  • Atomic execution of SQL statements and transactions (COMMIT and ROLLBACK of transactions)
  • Nonblocking queries (no more read locks)
  • Re-written in the C Programming Language

Oracle 2 – 1979

 First public release

Basic SQL functionality, queries

Here I am giving a quick checklist for a production DBA. I have not included any scripts because each DBA has their own. Please have a look at the daily, weekly and quarterly checks.

 Daily Checks

  • Check the availability of the database and instance, every 15 minutes
  • Check the availability of the listener, every 15 minutes
  • Check the sync between the primary database and standby database, every 15 minutes or based on the SLA (Service Level Agreement)
  • Check the space usage and make sure that all tablespace usage is below the critical level, once a day
  • Check the space usage of the archive log file system for both primary and standby
  • Verify the success of daily backups, once a day
  • Verify the success of archive log backups to the backup media, based on the backup interval
  • Check the system performance, on a periodic basis
  • Check the database performance, on a periodic basis
  • Clear the tickets assigned in the ticketing mechanism
  • Check for invalid objects
  • Go through the audit files for any suspicious activities
  • Go through the alert logs for any critical ORA- errors, once an hour
  • Verify all the monitoring agents, including the OEM agent and third-party monitoring agents, once an hour
  • Archive the alert logs, if required
  • Clear the trace files in the udump and bdump directories as per the policy
  • Verify the status of daily scheduled jobs
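A couple of the checks above can be scripted with simple queries such as these (the 85% threshold is only an example; use your site's critical level):

```sql
-- Instance availability: expect STATUS = 'OPEN'
SELECT instance_name, status FROM v$instance;

-- Tablespaces above an example 85% usage threshold
SELECT tablespace_name, ROUND(used_percent, 1) AS pct_used
  FROM dba_tablespace_usage_metrics
 WHERE used_percent > 85;
```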

Weekly Checks

  • Check the database statistics collection. On some databases this needs to be done every day, depending upon the requirement
  • Approve or plan any scheduled changes for the week
  • Check for critical and patch updates from Oracle
  • Verify the scheduled cron jobs and clear the output directory if required
  • Perform logical-level backups of important tables
  • Perform a level 0 or cold backup; this can be changed as per the backup policy

Quarterly Checks

  • Check for the critical patch updates from Oracle and make sure that your systems are in compliance with CPU patches
  • Verify the accuracy of the backups by creating test databases from the backup
  • Verify the accuracy of the DR mechanism by performing a database switchover test. This can be done once in six months based on the business requirements

As an Oracle DBA, we must understand the concepts of the Oracle architecture. With this article I am going to share my knowledge about it. Hope it will be useful.

There are two main things to clearly understand: the first is the Instance and the second is the Database.

1. INSTANCE:

An instance is nothing but the allocation of memory to the SGA (System Global Area) and the starting of the mandatory background processes.

System Global Area: The SGA is the primary memory structure and is a combination of the DB Buffer Cache, Redo Log Buffer, Large Pool, Java Pool and Shared Pool (the shared pool consists of the Library Cache and the Data Dictionary Cache).

Buffer Cache:- The buffer cache stores copies of data blocks retrieved from the datafiles. That is, when a user retrieves data from the database, the data is first placed in the buffer cache. Its size can be manipulated via the DB_CACHE_SIZE parameter in the init.ora initialization parameter file.
Shared Pool:- The shared pool is broken into two smaller memory areas – the Library Cache and the Dictionary Cache. The library cache stores information about commonly used SQL and PL/SQL statements and is managed by a Least Recently Used (LRU) algorithm. It also enables sharing of those statements among users. The dictionary cache, on the other hand, stores information about object definitions in the database, such as columns, tables, indexes, users, privileges, etc.
The shared pool size can be set via the SHARED_POOL_SIZE parameter in the init.ora initialization parameter file.
Redo Log Buffer:- Each DML statement (insert, update, delete) executed by a user generates a redo entry. What is a redo entry? It is information about all the data changes made by users. The redo entry is stored in the redo log buffer before it is written to the redo log files. To manipulate the size of the redo log buffer, you can use the LOG_BUFFER parameter in the init.ora initialization parameter file.
Large Pool:- The large pool is an optional area of memory in the SGA. It is used to relieve the burden placed on the shared pool, and also for I/O processes. The large pool size can be set by the LARGE_POOL_SIZE parameter in the init.ora initialization parameter file.
Java Pool:- As its name suggests, the Java pool services the parsing of Java commands. Its size can be set by the JAVA_POOL_SIZE parameter in the init.ora initialization parameter file.
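The current sizes of these SGA components can be inspected with a query along these lines:

```sql
-- SGA component sizes in bytes (zero-sized components filtered out)
SELECT component, current_size
  FROM v$sga_dynamic_components
 WHERE current_size > 0;
```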

Program Global Area:- Although the result of SQL statement parsing is stored in the library cache, the values of bind variables are stored in the PGA. Why? Because they must be private, not shared among users. The PGA is also used as a sort area.

Software Code Area:- The software code area is a location in memory where the Oracle application software resides.

Oracle Background Processes: Oracle background processes work behind the scenes together with the memory structures; they include PMON, SMON, DBWn, LGWR, CKPT, etc.

DBWn:- The database writer (DBWn) process writes data from the buffer cache into the datafiles. Historically the database writer was named DBWR, but since some Oracle versions allow more than one database writer, the name changed to DBWn, where n is a number from 0 to 9.
LGWR:- The log writer (LGWR) process is similar to DBWn: it writes the redo entries from the redo log buffer into the redo log files.
CKPT:- The checkpoint (CKPT) process signals DBWn to write the data in the buffer cache to the datafiles. It also updates the datafile and control file headers when a log file switch occurs.
SMON:- The system monitor (SMON) process recovers from a system crash or instance failure by applying the entries in the redo log files to the datafiles.
PMON:- The process monitor (PMON) process cleans up after failed processes by rolling back their transactions and releasing other resources.
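To see which background processes are actually running in an instance, a query like the following works:

```sql
-- Running background processes (PADDR <> '00' filters out unstarted ones)
SELECT name, description
  FROM v$bgprocess
 WHERE paddr <> '00';
```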

2. DATABASE:

Database refers to the disk resources and it is divided into two main structures (a) Logical structures and (b) Physical structures.

Logical Structures: Oracle database is divided into smaller logical units to manage, store, and retrieve data efficiently. The logical units are tablespace, segment, extent, and data block.

Tablespace: A tablespace is a logical grouping of database objects. A database must have one or more tablespaces, e.g. the SYSTEM, SYSAUX and USERS tablespaces. A tablespace is composed of one or more datafiles.
Segment:- A tablespace is further broken into segments. A segment stores objects of the same type: every table in the database is stored in its own segment (a data segment), and every index likewise in its own segment (an index segment). The other segment types are temporary segments and rollback segments.
Extent:- A segment is further broken into extents. An extent consists of one or more data blocks. When a database object grows, a new extent is allocated to it. Unlike a tablespace or a segment, an extent cannot be named.
Data Block:- A data block is the smallest unit of storage in the Oracle database. The block size is a fixed number of bytes, so every block within a tablespace has the same size.
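The tablespace → segment → extent → block hierarchy can be observed from the data dictionary; for example (the SCOTT schema is just a placeholder):

```sql
-- Segments with their extent and block counts for one schema
SELECT segment_name, segment_type, tablespace_name, extents, blocks
  FROM dba_segments
 WHERE owner = 'SCOTT';
```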

Physical Structures:- The physical structures are the structures of an Oracle database (in this case, the disk files) that are not directly manipulated by users. They consist of datafiles, redo log files, and control files.

Datafiles:- A datafile is a file that corresponds to a tablespace. One datafile can be used by only one tablespace, but one tablespace can have more than one datafile.
Redo Log Files:- Redo log files store the redo entries generated by DML statements, and can be used for recovery processes.
Control Files:- Control files store information about the physical structure of the database, such as datafile sizes and locations, redo log file locations, etc.
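The locations of all three kinds of physical files can be listed from the V$ views:

```sql
SELECT name   FROM v$datafile;    -- datafiles
SELECT member FROM v$logfile;     -- redo log members
SELECT name   FROM v$controlfile; -- control files
```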