RAC Administration
Cluster Ready Services Administration
• To check whether the CRS services are running, use
o #crsctl check crs

• Voting disk: a file that manages node membership information

• To retrieve the current voting disk configuration, use
o #crsctl query css votedisk

• To back up a voting disk, use
o #dd if=voting_disk_name of=backup_file_name

• To recover a voting disk, use
o #dd if=backup_file_name of=voting_disk_name
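For example, a minimal backup/restore sketch with hypothetical paths (get the actual voting disk path from crsctl query css votedisk):
o #dd if=/dev/raw/raw1 of=/backup/votedisk.bak bs=4k
o #dd if=/backup/votedisk.bak of=/dev/raw/raw1 bs=4k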

• To multiplex (add) a voting disk, use
o #crsctl add css votedisk path

• To delete a voting disk, use
o #crsctl delete css votedisk path
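For example, with a hypothetical raw device (on 10g the clusterware typically must be down and -force appended):
o #crsctl add css votedisk /dev/raw/raw3 -force
o #crsctl delete css votedisk /dev/raw/raw3 -force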

• OCR (Oracle Cluster Registry): a file that holds the cluster and RAC database configuration information. It is administered with the ocrconfig, ocrcheck, and ocrdump utilities. Oracle Clusterware does not support more than two OCR files.

• To add an OCR location, use
o #ocrconfig -replace ocr destination_file_or_disk

• To add an OCR mirror location, use
o #ocrconfig -replace ocrmirror destination_file_or_disk

• To check which OCR files are online, use
o #ocrcheck

• To update a node's OCR configuration so it can rejoin the cluster after it is restarted, use
o #ocrconfig -repair

• To repair the OCR configuration on a node while Oracle Clusterware is stopped on it, use
o #ocrconfig -repair ocrmirror device_name

• To remove an OCR file (at least one OCR file must remain online), use
o #ocrconfig -replace ocr

• To remove the mirror OCR file, use
o #ocrconfig -replace ocrmirror
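A sketch with a hypothetical device name: adding or moving the mirror versus removing it:
o #ocrconfig -replace ocrmirror /dev/raw/raw4
o #ocrconfig -replace ocrmirror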
• If a node cannot start up and its alert log contains CLSD-1009 and CLSD-1011 messages, overwrite the OCR with ocrconfig -overwrite.
• Oracle Clusterware automatically backs up the OCR every four hours. To take a manual backup, use
o #ocrconfig -manualbackup

• To list the existing backups, use
o #ocrconfig -showbackup

• To view the contents of a backup file, use
o #ocrdump -backupfile file_name

• To restore an OCR file, stop CRS and then use
o #ocrconfig -restore backup_file_name
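A minimal restore sequence, assuming a hypothetical automatic backup path (run as root, with CRS stopped on all nodes):
o #crsctl stop crs
o #ocrconfig -restore /u01/app/crs/cdata/crs/backup00.ocr
o #crsctl start crs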

• To export the OCR, use
o #ocrconfig -export file_name

• To check the OCR integrity, use
o #cluvfy comp ocr -n rac1,rac2 -verbose (or ocrcheck)

• OCRCHECK and OCRDUMP can be used to diagnose OCR problems.
• To upgrade or downgrade the OCR, use
o #ocrconfig -upgrade or #ocrconfig -downgrade

Cluster Verification Utility (#cluvfy)
• It is used to verify the primary cluster components.
• To run a verification against all cluster nodes, use the -n all option
o #cluvfy comp nodecon -n all

• To list the nodes configured in Oracle Clusterware, use the olsnodes utility
• To verify the system requirements on the nodes before installing Oracle Clusterware (OCW) or the database, use
o #cluvfy comp sys [-n node_list] -p {crs|database} -verbose

• For example, to verify the system requirements for installing OCW, use
o #cluvfy comp sys -n node1,node2 -p crs -verbose

• To verify shared storage accessibility, use
o #cluvfy comp ssa -n node1,node2 -s storageID_list -verbose
• To discover all of the shared storage available on the system, use
o #cluvfy comp ssa -n all -verbose
o #cluvfy comp ssa -n all -s /dev/sdb -verbose

• To verify node reachability, use
o #cluvfy comp nodereach -n node_list [-srcnode node] [-verbose]
• To verify the connectivity between the cluster nodes through all of the available network interfaces, use
o #cluvfy comp nodecon -n node_list [-i interface_list] [-verbose] OR
o #cluvfy comp nodecon -n all -verbose

• To verify user permissions and administrative privileges, use
o #cluvfy comp admprv [-n node_list] -o {user_equiv|crs_inst|db_inst|db_config -d oracle_home} [-verbose]

• To verify that the system meets the requirements for OCW installation, use
o #cluvfy stage -pre crsinst -n node_list -verbose
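For example, a typical pre-install run on two hypothetical nodes:
o #cluvfy stage -pre crsinst -n rac1,rac2 -verbose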

• To verify that OCW is functioning properly after installation, use
o #cluvfy stage -post crsinst -n node_list -verbose

• To verify that the system meets the requirements for RAC installation, use
o #cluvfy stage -pre dbinst -n node1,node2 -verbose

• To verify that the system is ready for database creation or a configuration change, use
o #cluvfy stage -pre dbcfg -n node1,node2 -d oracle_home -verbose

• To check the integrity of the entire cluster, use
o #cluvfy comp clu

• To verify all of the Clusterware components, use
o #cluvfy comp crs -n node_list -verbose

• To verify the individual Cluster Manager subcomponents, use
o #cluvfy comp clumgr -n node_list -verbose

• To verify the integrity of the OCR, use
o #cluvfy comp ocr
******************************************************************************************************************************************

We can use srvctl to start, stop, check the status of, add, remove, enable, and disable an ASM instance.
To add ASM: #srvctl add asm -n node_name -i asm_instance_name -o oracle_home
To remove ASM: #srvctl remove asm -n node_name -i asm_instance_name
To enable/disable ASM: #srvctl enable/disable asm -n node_name -i asm_inst_name
To start/stop an ASM instance:
#srvctl start/stop asm -n node_name [-i asm_instance_name] [-o start_options] [-c connect_str]
To know the status of ASM: #srvctl status asm -n node_name
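A quick sketch with hypothetical names (node rac1, instance +ASM1, ASM home path assumed):
#srvctl add asm -n rac1 -i +ASM1 -o /u01/app/oracle/product/10.2.0/asm
#srvctl start asm -n rac1
#srvctl status asm -n rac1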
To start or stop a DB instance:
#srvctl start/stop instance -d db_name -i instance_name_list [-o start/stop_options] [-c connect_str]
To start or stop the entire cluster database:
#srvctl start/stop database -d db_name [-o start_options] [-c connect_str]
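For example, with a hypothetical database racdb and instance racdb1:
#srvctl stop instance -d racdb -i racdb1 -o immediate
#srvctl start database -d racdb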
By default, Oracle Clusterware controls automatic database startup in Oracle RAC environments when a system reboots. This behavior is governed by a management policy; there are two policies: automatic (the default) and manual.
To know the current policy:
#srvctl config database -d db_name -a
To set the policy:
#srvctl add/modify database -d db_name -y policy_name
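For example, to switch a hypothetical database racdb to manual startup:
#srvctl modify database -d racdb -y MANUAL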
The parameters that must have identical values on all instances are active_instance_count, archive_lag_target, cluster_database, cluster_database_instances, control_files, db_domain, db_name, and undo_management.
*Oracle uses the instance_number parameter to distinguish among instances at startup.
**Workload management enables you to manage workload distribution to provide high availability and scalability through services (an instance can be preferred or available for a service).

1. Connection load balancing: client-side LB balances connection requests across the listeners. With server-side LB, the listener directs a connection request to the best instance currently providing the service, using the LB Advisory. When you create a RAC DB using DBCA, it configures and enables server-side load balancing (SSLB) by default.
If we configure Transparent Application Failover (TAF) for a connection, then Oracle moves the session to a surviving instance on failure.
TAF can restart a query after failover has completed, but for other types of transactions (INSERT, UPDATE, DELETE) we must roll back the failed transaction and resubmit it.
Services simplify the deployment of TAF and do not require any client-side changes; TAF settings on a service override any TAF settings in the client connection definition.
To define a TAF policy:
EXECUTE DBMS_SERVICE.MODIFY_SERVICE (service_name => 'ksa'
, aq_ha_notifications => TRUE
, failover_method => DBMS_SERVICE.FAILOVER_METHOD_BASIC
, failover_type => DBMS_SERVICE.FAILOVER_TYPE_SELECT
, failover_retries => 180
, failover_delay => 5
, clb_goal => DBMS_SERVICE.CLB_GOAL_LONG);
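For reference, a client-side TAF definition (which a TAF setting on the service overrides) is specified with FAILOVER_MODE in tnsnames.ora; a minimal sketch with hypothetical host and service names:
ksa =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = ksa)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))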
Fast Application Notification (FAN): it notifies other processes about configuration and service-level status changes (UP or DOWN) so they can take immediate action. FAN can be used without programmatic changes if you are using an integrated Oracle client, i.e., Oracle Database 10g JDBC, ODP.NET, or OCI.
FAN events are published using ONS and Oracle Streams Advanced Queuing.
The Oracle RAC HA framework maintains service availability by using Oracle Clusterware and resource profiles. The Oracle HA framework monitors the database and its services and sends event notifications using FAN.
The load balancing advisory provides advice about how to direct incoming work to the instance that provides the best service for that work. The load balancing advisory can be used by defining a service-level goal for each service for which we want to enable load balancing. There are two types of service-level goals:
1. SERVICE TIME: attempts to direct work requests to instances according to response time
2. THROUGHPUT: attempts to direct work requests according to throughput
EXECUTE DBMS_SERVICE.MODIFY_SERVICE (service_name => 'OE'
, goal => DBMS_SERVICE.GOAL_SERVICE_TIME -- or DBMS_SERVICE.GOAL_THROUGHPUT
, clb_goal => DBMS_SERVICE.CLB_GOAL_SHORT);

We can retrieve the goal settings for services by using the views DBA_SERVICES, V$SERVICES, and V$ACTIVE_SERVICES.
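For example (column names as in the 10g/11g data dictionary):
SELECT name, goal, clb_goal FROM dba_services;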
To add a middle-tier node or to update an Oracle RAC node, use racgons:
#racgons add_config hostname:port
Enable distributed transaction processing (DTP) for services: for services that will be used for distributed transaction processing, create the service using Enterprise Manager, DBCA, or SRVCTL and define only one instance as the preferred instance.
#srvctl add service -d db_name -s service_name -r RAC01 -a RAC02
Then mark the service for DTP:
EXECUTE DBMS_SERVICE.MODIFY_SERVICE (service_name => 'US.ORACLE.COM', dtp => TRUE);
Services can be administered with Enterprise Manager, DBCA, PL/SQL, and SRVCTL.
You can create, modify, delete, start, and stop services.
#srvctl add service -d db_unique_name -s service_name -r preferred_list [-a available_list] [-P TAF_policy]

#srvctl start/stop service -d db_unique_name [-s service_name_list][-i inst_name][-o start_options][-c connect_str]
#srvctl enable service -d db_unique_name -s service_name_list -i inst_name
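A sketch with hypothetical names (database racdb, service oltp, preferred instance racdb1, available instance racdb2):
#srvctl add service -d racdb -s oltp -r racdb1 -a racdb2 -P BASIC
#srvctl start service -d racdb -s oltp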

To relocate a service from instance1 to instance2, use:
srvctl relocate service -d db_name -s service_name -i instance1 -t instance2
To obtain the status or configuration details of a service, use
srvctl status service -d db_name -s service_name -a
srvctl status service -d db_name -s RAC -a
srvctl status service -d db_name -s RACXDB -a
srvctl status service -d db_name -s SYS$BACKGROUND -a
srvctl status service -d db_name -s SYS$USERS -a