
Saturday, June 22, 2013

Migrating a Database to an ASM Disk Group

If Block Change Tracking is configured, disable it.

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
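
Whether change tracking is enabled in the first place can be checked by querying V$BLOCK_CHANGE_TRACKING, for example:

SQL> SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;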

Set CONTROL_FILES to the ASM locations and start the instance.

SQL> ALTER SYSTEM SET CONTROL_FILES='+DATADG','+FRADG' SCOPE=spfile;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP NOMOUNT

Restore controlfile into the new ASM location.
RMAN> CONNECT TARGET
RMAN> RESTORE CONTROLFILE FROM '<old_controlfile_path>';

Back up the database into the ASM disk group.
RMAN> ALTER DATABASE MOUNT;
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATADG';

Switch all datafiles and tempfiles to the new ASM location.

RMAN> SWITCH DATABASE TO COPY;
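
To confirm the switch, check the datafile names; they should now all point into the +DATADG disk group, for example:

SQL> select name from v$datafile;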

Back up the read-only tablespaces into the ASM disk group.
SQL> ALTER TABLESPACE <tablespace_name> OFFLINE;
RMAN> BACKUP AS COPY TABLESPACE <tablespace_name> FORMAT '+DATADG';
RMAN> SWITCH TABLESPACE <tablespace_name> TO COPY;
SQL> ALTER TABLESPACE <tablespace_name> ONLINE;

RMAN> RUN {
          SET NEWNAME FOR TEMPFILE 1 TO '+DATADG';
          ...
          SWITCH TEMPFILE ALL;
          }


Open the database.

RMAN> ALTER DATABASE OPEN;
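
If Block Change Tracking was disabled in the first step, it can be re-enabled now with the tracking file placed in the ASM disk group (the disk group shown is just the one used in this example):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATADG';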

Drop the online redo log groups and re-create them in the ASM disk groups.

SQL> select group#, status from v$log;
    GROUP# STATUS
---------- ----------------
         1 CURRENT
         2 INACTIVE
         3 INACTIVE

SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
SQL> ALTER DATABASE ADD LOGFILE GROUP 3 ('+DATADG','+FRADG') SIZE 500M;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> ALTER SYSTEM CHECKPOINT;

Repeat the above step for other logfile groups.
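
Once all groups have been re-created, verify that every member now lives in ASM, for example:

SQL> select group#, member from v$logfile;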

Set archive log destination to the ASM location.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'location=+FRADG' SCOPE=BOTH;

Relocate spfile to an ASM disk group.

SQL> CREATE PFILE='$ORACLE_HOME/dbs/initORCL.ora' FROM spfile;
SQL> CREATE SPFILE='+DATADG/ORCL/spfileORCL.ora' FROM PFILE='$ORACLE_HOME/dbs/initORCL.ora';

Shutdown database.
SQL> SHUTDOWN IMMEDIATE

Set the ASM environment and modify the Oracle Restart configuration to point to the new spfile location.
$ srvctl modify database -d ORCL -p '+DATADG/ORCL/spfileORCL.ora'
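
The registered spfile location can then be confirmed in the Oracle Restart configuration, for example:

$ srvctl config database -d ORCL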

Saturday, June 15, 2013

Maximum Availability Architecture

PREREQUISITES:

1. The primary RAC database is in archivelog mode (see the check below).
2. Grid Infrastructure and the Oracle database software are installed on the standby node.
3. The spfile is on a shared filesystem.
4. Create the dump directories (adump etc.) on the standby node if needed.
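
A quick check for the first prerequisite on the primary, for example:

SQL> select log_mode from v$database;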

1. Place the primary database in force logging mode.
SQL> alter database force logging;
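
Force logging can be verified afterwards, for example:

SQL> select force_logging from v$database;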

PREPARE PRIMARY DATABASE


2. Configure the primary database initialization parameters to support both the primary and standby roles.

SQL> alter system set log_archive_config='dg_config=(ORCL,ORCLS)' scope=both ;
SQL> alter system set log_archive_dest_1='LOCATION=/u05/ORCL/arch MANDATORY REOPEN=30 VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=ORCL' scope=both;
SQL> alter system set log_archive_dest_2='SERVICE=ORCLS LGWR ASYNC reopen=15 max_failure=20 net_timeout=15 VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLS' scope=both;

SQL> alter system set log_archive_dest_state_1='enable' scope=both;
SQL> alter system set log_archive_dest_state_2=defer scope=both;
SQL> alter system set log_archive_max_processes=10 scope=both;

SQL> alter system set fal_server='ORCLS' scope=both;
SQL> alter system set fal_client='ORCL1' scope=both sid='ORCL1'; --- Deprecated in 11gR2
SQL> alter system set fal_client='ORCL2' scope=both sid='ORCL2'; --- Deprecated in 11gR2

SQL> alter system set db_file_name_convert='/u3/baeifspdat','/baeifst_dbf/oradata/baeifst' scope=spfile;
SQL> alter system set log_file_name_convert='/u1/baeifst/redo','/baeifst_log1/oradata/baeifst','/u2/baeifst/redo','/baeifst_log2/oradata/baeifst' scope=spfile;
SQL> alter system set standby_file_management='AUTO' scope=both;

SQL> alter system set db_unique_name='ORCL' scope=spfile;

SQL> alter database set standby database to maximize availability;
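
The configured protection mode can be confirmed afterwards, for example:

SQL> select protection_mode, protection_level from v$database;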



3. Create a pfile on the primary and copy the password file and pfile to the standby.

SQL> create pfile from spfile;
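
A minimal sketch of the copy, assuming the primary instance SID is ORCL1 and the standby host is reachable as stbynode (both names are illustrative; adjust paths to your environment). The files are dropped into /tmp on the standby and moved into place there:

$ scp $ORACLE_HOME/dbs/orapwORCL1 oracle@stbynode:/tmp/
$ scp $ORACLE_HOME/dbs/initORCL1.ora oracle@stbynode:/tmp/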


PREPARE STANDBY DATABASE

4. In the standby pfile, modify db_unique_name, fal_server, log_archive_dest_n, audit_file_dest, control_files, service_names, etc., and reverse the convert parameters:

db_file_name_convert='/baeifst_dbf/oradata/baeifst','/u3/baeifspdat'

log_file_name_convert='/baeifst_log1/oradata/baeifst','/u1/baeifst/redo','/baeifst_log2/oradata/baeifst','/u2/baeifst/redo'


SQL> startup nomount pfile=

SQL> create spfile from pfile;

SQL> shutdown immediate

CREATE STANDBY DATABASE

5. Copy the datafiles (taken in hot backup mode) and a standby controlfile to the standby.


SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/control_stby.ctl';
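
A minimal sketch of the copy, assuming the primary datafiles live under /u3/baeifspdat and the standby host is reachable as stbynode (the host name is illustrative; the paths match the convert parameters above):

SQL> alter database begin backup;
$ scp /u3/baeifspdat/* oracle@stbynode:/baeifst_dbf/oradata/baeifst/
SQL> alter database end backup;
$ scp /tmp/control_stby.ctl oracle@stbynode:/tmp/

On the standby, place control_stby.ctl at each location listed in its control_files parameter.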


6. Add the database resource to the cluster.
$ srvctl add database -d <db_unique_name> -o <oracle_home> -p <spfile_location>
$ srvctl add instance -d <db_unique_name> -i <instance_name> -n <node_name>
$ srvctl add instance -d <db_unique_name> -i <instance_name> -n <node_name>

7. Mount the standby database.
$ srvctl start database -d <db_unique_name> -o mount

Create the standby redo logs to support the standby role on both primary and standby. The recommended number of standby redo logs is one more than the number of online redo logs for each thread.
SQL> alter database add standby logfile thread 1 group 11 ('redo11a','redo11b') size 512M; --> same size as the online redo logs; one extra group per thread, up to group 14.
SQL> alter database add standby logfile thread 2 group 15 ('redo15a','redo15b') size 512M; --> one extra group, up to group 18.



8. Start managed recovery on standby database.
SQL> recover managed standby database disconnect from session;

9. Enable log shipping on the primary.
SQL> alter system set log_archive_dest_state_2=enable scope=both;
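
A log switch on the primary forces the current redo to ship; the verification queries below can then be used to confirm transport and apply.

SQL> alter system archive log current;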



-----------------------------------------------------------------

Check the number and group numbers of the redo logs.
SQL> select * from v$log;

Check the standby redo logs.
SQL> SELECT GROUP#,THREAD#,SEQUENCE#,ARCHIVED,STATUS FROM V$STANDBY_LOG;

Verify the archive destinations are valid and show no errors on both the primary and the standby.

SQL> select DEST_ID, STATUS, DESTINATION, ERROR from V$ARCHIVE_DEST where DEST_ID <= 2;


Verify there are no gaps in redo on the primary.

SQL> select STATUS, GAP_STATUS from V$ARCHIVE_DEST_STATUS where DEST_ID = 2;





Verify redo apply on the standby.

SQL> select thread#, sequence#, archived, applied from v$archived_log order by 1,2;

SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;

SQL> SELECT ARCHIVED_THREAD#, ARCHIVED_SEQ#, APPLIED_THREAD#, APPLIED_SEQ# FROM V$ARCHIVE_DEST_STATUS;