Saturday, October 22, 2011

RAC SCAN Listener

SCAN (Single Client Access Name) is a new RAC 11gR2 feature that lets clients connect to Oracle databases in a cluster using a single name. The SCAN name must be resolvable, without the domain suffix, to three IP addresses using a round-robin algorithm, and the IP addresses must be on the same subnet as the cluster's public network. Each nslookup returns the set of three IPs in a different order, and the client tries to connect to one of the IPs it receives. If the client receives an error, it tries the other IPs before returning an error to the user. When a SCAN listener receives a connection request, it redirects the request to the local listener on the least-loaded node, and the client finally establishes its connection to the database instance through that node's local listener.
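
For example, resolving the SCAN with nslookup might look like the following (scan-name.example.com and the addresses are just illustrative values):

$ nslookup scan-name.example.com
Name:    scan-name.example.com
Address: 192.168.10.101
Name:    scan-name.example.com
Address: 192.168.10.102
Name:    scan-name.example.com
Address: 192.168.10.103

Running the lookup again should return the same three addresses in a different order.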

The REMOTE_LISTENER parameter should be set to the SCAN so that the instances register with the SCAN listeners, providing information about their current load, a recommendation on how many connections should be directed to each instance, and the services each instance provides. The LOCAL_LISTENER parameter should be set to the node VIP.

remote_listener -> scan-name.example.com:1521
local_listener -> (ADDRESS = (PROTOCOL=TCP) (HOST=node-vip.example.com) (PORT=1521))
service_names -> RACservice
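
These parameters can be set dynamically with ALTER SYSTEM; a minimal sketch using the example values above (the instance SID RACDB1 is hypothetical, and LOCAL_LISTENER is set per instance):

SQL> alter system set remote_listener='scan-name.example.com:1521' scope=both sid='*';
SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=node-vip.example.com)(PORT=1521))' scope=both sid='RACDB1';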

An Oracle 11gR2 client can use a TNS entry like the following.

RACservice =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP) (HOST = scan-name.example.com) (PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RACservice)
    )
  )

To display configuration information for all SCAN listeners.
$ srvctl config scan_listener
To display configuration information for all SCAN VIPs.
$ srvctl config scan
To modify the SCAN listener endpoints. Stop and start the SCAN listener, and the local listener on that node, so that the new port takes effect.
$ srvctl modify scan_listener -p TCP:<new_port>
To stop the SCAN listener.
$ srvctl stop scan_listener -i <ordinal_number>
To start the SCAN listener.
$ srvctl start scan_listener -i <ordinal_number>
To display detailed configuration information for local listeners.
$ srvctl config listener -a
To modify the port for a local listener. Stop and start the local listener so that the new port takes effect.
$ srvctl modify listener -l listener -p TCP:<new_port>
To stop the local listener.
$ srvctl stop listener -n <node> -l listener
To start the local listener.
$ srvctl start listener -n <node> -l listener
To remove the local listener.
$ srvctl remove listener -n <node> -l listener
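
As a worked example, changing the SCAN listener port to 1551 (a hypothetical port) could look like this:

$ srvctl modify scan_listener -p TCP:1551
$ srvctl stop scan_listener
$ srvctl start scan_listener
$ srvctl config scan_listener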

Saturday, September 10, 2011

Oracle Cluster Registry and Voting Disks

OCR and voting disks are shared files on a cluster file system. The OCR contains the configuration information about the cluster resources, while the voting disks are used to monitor the status of the cluster nodes.

To display OCR locations
$ ocrcheck 
To replace the current OCR location
# ocrconfig -replace <current_location> -replacement <new_location>
To add a new OCR location. Create an empty file first with the touch command.
# ocrconfig -add <new_location>
To delete an OCR location. If only one OCR location is configured and online, it cannot be deleted.
# ocrconfig -delete <current_location>
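
For example, to add a second OCR location (the path /u02/ocrdata/ocr2 is hypothetical):

# touch /u02/ocrdata/ocr2
# ocrconfig -add /u02/ocrdata/ocr2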

To retrieve the list of voting files.
$ crsctl query css votedisk 
To add a voting disk.
$ crsctl add css votedisk <new_location>
To delete a voting disk. If only one voting disk is configured and online, it cannot be deleted.
$ crsctl delete css votedisk <Universal_File_Id>
If all voting disks are lost, start Clusterware in exclusive mode and replace the voting disk.
# crsctl start crs -excl
# crsctl replace votedisk <location>

In Oracle Clusterware 11gR2, the voting disk data is automatically backed up in the OCR as part of any configuration change and is automatically restored to any voting disk that is added. The CRSD process also creates an OCR backup every four hours in $GRID_HOME/cdata/<cluster_name>.

To list the backup files.
$ ocrconfig -showbackup
To restore an OCR backup. Stop Clusterware on all nodes first, and create an empty OCR file with the same name if the original file no longer exists.
# ocrconfig -restore <backup_location>
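
A sketch of a complete restore sequence (backup00.ocr is the most recent automatic backup in a default setup; confirm the actual file with ocrconfig -showbackup):

# crsctl stop crs       (run on all nodes)
# ocrconfig -restore $GRID_HOME/cdata/<cluster_name>/backup00.ocr
# crsctl start crs      (run on all nodes)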

To verify the Clusterware version.
$ crsctl query crs activeversion
To verify that Clusterware is running on the node.
$ crsctl check crs

Saturday, August 27, 2011

11g Snapshot Standby Database

A snapshot standby database is a physical standby database that has been temporarily opened as a fully updateable database for testing or reporting. It continues to receive redo data from the primary but does not apply it; the redo is applied after the conversion back to a physical standby, resynchronizing it with the primary. The conversion to snapshot standby implicitly creates a guaranteed restore point (GRP), and the conversion back to physical standby removes this guaranteed restore point.

CONVERT TO SNAPSHOT STANDBY
1. Stop Redo Apply
SQL> recover managed standby database cancel;
2. Ensure the database is mounted, but not open.
3. Convert to Snapshot Standby. This creates the GRP and leaves the database in nomount mode.
SQL> alter database convert to snapshot standby;
4. Restart the database.
SQL> shutdown immediate
SQL> startup
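
To confirm the conversion, check the database role and open mode; the role should now show SNAPSHOT STANDBY with the database open READ WRITE.

SQL> select database_role, open_mode from v$database;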

CONVERT BACK TO PHYSICAL STANDBY
1. Ensure the database is mounted, but not open.
SQL> shutdown immediate
SQL> startup mount
2. Convert to Physical Standby. This discards the changes made to the database, flashes the database back to the GRP, deletes the flashback logs in the recovery area, drops the GRP, and leaves the database in nomount mode.
SQL> alter database convert to physical standby;
3. Restart the database in mount mode.
SQL> shutdown immediate
SQL> startup mount
4. Start Redo Apply.
SQL> recover managed standby database disconnect from session;
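
To confirm that redo apply has resumed, check for the MRP process on the standby.

SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';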


Saturday, June 25, 2011

Data Guard - Failed Primary

Prior to Oracle 10g, after a failover occurred, it was impossible to bring the original primary database back into the Data Guard configuration as a new physical standby without restoring it. To avoid this restore, flashback logging must have been enabled, or a guaranteed restore point must have been created, on the original primary database so that it can be flashed back to a state prior to its failure.

Set up a job to create a GRP daily and drop the previous day's GRP, as sketched below. See my blog at http://anandbitra.blogspot.com/2011/05/guaranteed-restore-point.html for more info on GRP.
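
A minimal sketch of such a job using DBMS_SCHEDULER (the job name DAILY_GRP_JOB, the GRP_YYYYMMDD naming convention, and the midnight schedule are assumptions):

SQL> begin
       dbms_scheduler.create_job(
         job_name        => 'DAILY_GRP_JOB',   -- hypothetical job name
         job_type        => 'PLSQL_BLOCK',
         job_action      => q'[
           declare
             v_new varchar2(30) := 'GRP_' || to_char(sysdate, 'YYYYMMDD');
             v_old varchar2(30) := 'GRP_' || to_char(sysdate - 1, 'YYYYMMDD');
           begin
             -- create today's guaranteed restore point
             execute immediate 'create restore point ' || v_new ||
                               ' guarantee flashback database';
             -- drop yesterday's GRP; it may not exist on the first run
             begin
               execute immediate 'drop restore point ' || v_old;
             exception when others then null;
             end;
           end;]',
         repeat_interval => 'FREQ=DAILY;BYHOUR=0',
         enabled         => true);
     end;
     /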

FAILOVER TO PHYSICAL STANDBY
1. Identify any gaps on standby.
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;
2. If primary host is accessible, copy missing logs and register on standby.
SQL> alter database register physical logfile 'path';
3. Initiate failover on standby. The FORCE keyword terminates any active RFS processes.
SQL> recover managed standby database finish force;
4. Convert the standby to primary.
SQL> alter database commit to switchover to primary;
5. Open this new primary database.
SQL> shutdown immediate
SQL> startup
6. Stop redo transport to failed primary.
SQL> alter system set log_archive_dest_state_2='defer';
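
At this point the role of the new primary can be verified.

SQL> select database_role from v$database;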

CONVERT FAILED PRIMARY TO PHYSICAL STANDBY
1. Determine SCN at which old standby became primary.
SQL> select to_char(standby_became_primary_scn) from v$database;
2. On the failed primary, flash back to the GRP.
SQL> startup mount
SQL> flashback database to restore point <before_failure>;
3. Recover database until SCN.
SQL> recover database until change <standby_became_primary_scn>;
4. Convert to physical standby.
SQL> alter database convert to physical standby;
5. Mount the database.
SQL> shutdown immediate
SQL> startup mount
6. Start redo apply on this new standby. 
SQL> recover managed standby database disconnect from session;
7. Start redo transport on primary.
SQL> alter system set log_archive_dest_state_2='enable'; 
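
To monitor how far the new standby lags behind the primary, a query like this may help:

SQL> select name, value from v$dataguard_stats where name like '%lag%';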

SWITCHOVER TO ORIGINAL PRIMARY (CURRENTLY PHYSICAL STANDBY)
1. On the primary, switch over to physical standby.
SQL> alter database commit to switchover to physical standby with session shutdown;
2. Mount the database.
SQL> shutdown immediate
SQL> startup mount
3. Start Redo apply.
SQL> recover managed standby database disconnect from session;
4. On the standby, switch over to primary.
SQL> alter database commit to switchover to primary;
5. Open the database.
SQL> shutdown immediate
SQL> startup
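
Before issuing each switchover command, SWITCHOVER_STATUS in v$database can be checked to confirm readiness (expect TO STANDBY on the primary and TO PRIMARY on the standby):

SQL> select switchover_status from v$database;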

Saturday, May 14, 2011

Guaranteed Restore Point

A Guaranteed Restore Point (GRP) ensures that the database can be reverted to its state at the restore point SCN regardless of the DB_FLASHBACK_RETENTION_TARGET initialization parameter setting. As it is always preserved, it must be dropped explicitly. The disk space usage and the performance overhead of GRP logging are lower than those of normal flashback database logging, because the database stores the before image of a block in the flashback logs only the first time the block is modified; subsequent modifications to the same block are not logged.

Requirements:
1. The database must be running in ARCHIVELOG mode. 
2. The fast recovery area must be configured. See my blog at http://anandbitra.blogspot.com/2011/04/fast-recovery-area.html
3. If flashback logging is not enabled, the database must be mounted (not open) to create the first GRP, or to create a GRP after all previously created GRPs have been dropped.

Create GRP
SQL> create restore point before_app_upgrade guarantee flashback database;

Drop GRP
SQL> drop restore point before_app_upgrade;

List Restore Points
SQL> select name, scn, time, database_incarnation#, guarantee_flashback_database, storage_size from v$restore_point;
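
A typical use of the GRP is to revert the database after a failed application upgrade; a sketch:

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_app_upgrade;
SQL> alter database open resetlogs;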

Considerations:
Dropping a tablespace or shrinking a datafile can prevent flashing back the affected datafiles to the GRP. However, the shrunken datafile can be taken offline so that the rest of the database can be flashed back, and the shrunken datafile can then be restored and recovered later.

Saturday, April 16, 2011

Fast Recovery Area

In Oracle 11gR2, the flash recovery area was renamed the fast recovery area. The FRA is the unified location for the storage of backup and recovery related files. These files are managed automatically: RMAN and flashback operations delete files when needed to reclaim space for new ones. The FRA can contain the following files.
  • Archived redo logs
  • Flashback logs
  • RMAN backups
  • Online redo log
  • Controlfile
The following parameters can be configured dynamically in order to set up the FRA.
  • DB_RECOVERY_FILE_DEST_SIZE
  • DB_RECOVERY_FILE_DEST
  • DB_FLASHBACK_RETENTION_TARGET  --> default value is 1440 min
Setup FRA
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=2G;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u05/fra';
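
The retention target can also be adjusted dynamically; for example, a hypothetical two-day flashback window:

SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=2880;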

View FRA space usage
SQL> SELECT * FROM V$RECOVERY_FILE_DEST;
SQL> SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE;