Saturday, October 22, 2011

RAC SCAN Listener

SCAN (Single Client Access Name) is a new Oracle RAC 11gR2 feature that lets clients connect to databases in a cluster using a single name. The SCAN should resolve, without a domain suffix, to three IP addresses returned by DNS in round-robin order; the addresses must be on the same subnet as the cluster's public network. Each nslookup of the SCAN returns the set of three IPs in a different order. The client tries to connect to one of the IPs it receives; if that connection fails, it tries the remaining IPs before returning an error. When a SCAN listener receives a connection request, it redirects it to the local listener on the least-loaded node, and the client then establishes its connection to the database instance through that node's local listener.
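The resolution-and-failover behaviour described above can be sketched in Python. This is only an illustration: the IP addresses are hypothetical, and the connect check stands in for DNS and the Oracle client.

```python
# Hypothetical SCAN IPs for illustration; a real SCAN resolves via DNS.
SCAN_IPS = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]

def resolve_scan(ips, query_number):
    """Simulate DNS round-robin: each lookup returns the IPs rotated."""
    offset = query_number % len(ips)
    return ips[offset:] + ips[:offset]

def connect_with_failover(ips, try_connect):
    """Try each IP in the order received; return the first that accepts."""
    for ip in ips:
        if try_connect(ip):
            return ip
    raise ConnectionError("all SCAN IPs failed")

# Example: the first listener is down, so the client silently fails over.
up = {"192.0.2.12", "192.0.2.13"}
ips = resolve_scan(SCAN_IPS, query_number=0)
chosen = connect_with_failover(ips, lambda ip: ip in up)
print(chosen)  # 192.0.2.12
```

The client only surfaces an error after every IP in the returned set has been tried, which is what makes the SCAN a single, stable name for the whole cluster.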

The REMOTE_LISTENER parameter should be set to the SCAN so that the instances can register with the SCAN listeners and provide them with their current load, a recommendation on how many connections should be directed to each instance, and the services each instance offers. The LOCAL_LISTENER parameter should be set to the node VIP.

remote_listener -> scan-name.example.com:1521
local_listener -> (ADDRESS = (PROTOCOL=TCP) (HOST=node-vip.example.com) (PORT=1521))
service_names -> RACservice
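As a small illustration, the Python sketch below builds these two parameter values from the example names used above (node-vip.example.com, scan-name.example.com). The helper names are my own, not Oracle APIs:

```python
def local_listener_address(host, port=1521):
    """Build a LOCAL_LISTENER TNS address string for a node VIP."""
    return f"(ADDRESS=(PROTOCOL=TCP)(HOST={host})(PORT={port}))"

def remote_listener_value(scan_name, port=1521):
    """In 11gR2, REMOTE_LISTENER is simply scan-name:port."""
    return f"{scan_name}:{port}"

print(local_listener_address("node-vip.example.com"))
print(remote_listener_value("scan-name.example.com"))
```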

An Oracle 11gR2 client can use a TNS entry like the following:

RACservice =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP) (HOST = scan-name.example.com) (PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RACservice)
    )
  )

To display configuration information for all SCAN listeners:
$ srvctl config scan_listener
To display configuration information for all SCAN VIPs:
$ srvctl config scan
To modify the SCAN listener endpoints (stop and start the SCAN listener, and the local listener on that node, for the new port to take effect):
$ srvctl modify scan_listener -p TCP:<new_port>
To stop a SCAN listener:
$ srvctl stop scan_listener -i <ordinal_number>
To start a SCAN listener:
$ srvctl start scan_listener -i <ordinal_number>
To display detailed configuration information for local listeners:
$ srvctl config listener -a
To modify the port of a local listener (stop and start the listener for the new port to take effect):
$ srvctl modify listener -l listener -p TCP:<new_port>
To stop a local listener:
$ srvctl stop listener -n <node> -l listener
To start a local listener:
$ srvctl start listener -n <node> -l listener
To remove a local listener:
$ srvctl remove listener -n <node> -l listener

Saturday, September 10, 2011

Oracle Cluster Registry and Voting Disks

OCR and voting disks are shared files on a cluster file system. The OCR holds configuration information about the cluster resources; the voting disks are used to monitor the status of the cluster nodes.
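A node must be able to access a strict majority of the configured voting disks to remain in the cluster, which is why an odd number of voting disks is typically configured. A quick sketch of the arithmetic (function names are illustrative):

```python
def majority(n_disks):
    """Number of voting disks a node must be able to access."""
    return n_disks // 2 + 1

def tolerable_failures(n_disks):
    """Voting disk failures the cluster can survive."""
    return n_disks - majority(n_disks)

for n in (1, 3, 5):
    print(n, majority(n), tolerable_failures(n))
```

With three voting disks a node needs two and can lose one; note that a fourth disk would not raise the number of tolerable failures, which is why even counts buy nothing.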

To display OCR locations:
$ ocrcheck
To replace the current OCR location:
# ocrconfig -replace <current_location> -replacement <new_location>
To add a new OCR location (create an empty file first with the touch command):
# ocrconfig -add <new_location>
To delete an OCR location (the last remaining online OCR location cannot be deleted):
# ocrconfig -delete <current_location>

To retrieve the list of voting files:
$ crsctl query css votedisk
To add a voting disk:
$ crsctl add css votedisk <new_location>
To delete a voting disk (the last remaining online voting disk cannot be deleted):
$ crsctl delete css votedisk <Universal_File_Id>
If all voting disks are lost, start Clusterware in exclusive mode and replace the voting disk:
# crsctl start crs -excl
# crsctl replace votedisk <location>

In Oracle Clusterware 11gR2, the voting disk data is automatically backed up in the OCR as part of any configuration change and is automatically restored to any voting disk that is added. The CRSD process automatically creates an OCR backup every four hours in $GRID_HOME/cdata/<cluster_name>.

To list the OCR backup files:
$ ocrconfig -showbackup
To restore an OCR backup (stop Clusterware on all nodes first; create an empty OCR file with the same name if the original file no longer exists):
# ocrconfig -restore <backup_location>

To display the active Clusterware version:
$ crsctl query crs activeversion
To verify that Clusterware is running on the node:
$ crsctl check crs