How to Clone an Oracle Home
======================
1. Tar the Oracle binaries on the source; once done, copy the archive to the target.
2. Untar the binaries on the target as the database user.
3. Update the /etc/oraInst.loc file:
cat /etc/oraInst.loc
inventory_loc=/u01/oracle/product/11.2.0/oraInventory
inst_group=oinstall
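Steps 1 and 2 can be sketched end-to-end as below. This is a minimal illustration using throwaway /tmp directories in place of the real source and target hosts (all paths are hypothetical; in practice the copy between hosts would be an scp of the tar archive):

```shell
#!/bin/sh
# Stand-ins for the Oracle Home on the source and target servers
SRC=/tmp/src_oracle_home
TGT=/tmp/tgt_oracle_home

# Step 1: tar the Oracle binaries on the source
mkdir -p "$SRC/bin"
echo "demo binary" > "$SRC/bin/oracle"
tar -czf /tmp/db_home.tgz -C "$SRC" .
# (real life: scp /tmp/db_home.tgz target:/tmp/)

# Step 2: untar on the target as the database user
mkdir -p "$TGT"
tar -xzf /tmp/db_home.tgz -C "$TGT"
ls "$TGT/bin"
```

Using tar preserves permissions and symlinks inside the home, which a plain recursive copy may not.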
4. Update the environment variables
=========================
$cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
export ORACLE_HOME=/u01/oracle/database/11.2.0/test
export ORACLE_SID=TEST
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export PATH
export TNS_ADMIN=/u01/oracle/database/11.2.0/test/network/admin
5. Once done, source .bash_profile and run the clone.pl script:
cd $ORACLE_HOME/clone/bin
perl clone.pl ORACLE_HOME="/u01/oracle/database/11.2.0/test" ORACLE_BASE="/u01/oracle/database/11.2.0" ORACLE_HOME_NAME="test"
6. Once done, it will ask you to run the root.sh script as the root user.
7. Connect via SQL*Plus and verify the database.
Issues and workaround
==================
Issue-1
[oratest@server02 lib]$ sqlplus
SQL*Plus: Release 11.2.0.4.0 -
Copyright (c) 1982, 2007, Oracle. All Rights
Reserved.
Enter user-name: / as sysdba
ERROR:
ORA-01031: insufficient privileges
The issue is due to a group mismatch in the config.c file.
Resolution:
Note: I changed the dba group to the oinstall group in config.c.
1. cd $ORACLE_HOME/rdbms/lib
2. mv config.o config.o.bak
3. Inspect the config.s file to check that it has the proper groups defined for 'dba' and 'oper' connections; make the necessary changes as required.
4. If config.s exists and config.c doesn't, then do:
cp config.s config.c
5. make -f ins_rdbms.mk config.o ioracle
$cat config.c
/* $Header: config_ux.c 115.1 2004/03/27 07:17:12 hfux ship $ */
/* SS_DBA_GRP defines the UNIX group ID for sqldba adminstrative access. */
/* Refer to the Installation and User's Guide for further information. */
#define SS_DBA_GRP "oinstall"
#define SS_OPER_GRP "oinstall"
char *ss_dba_grp[] = {SS_DBA_GRP, SS_OPER_GRP};
If it's a RAC database, some additional steps need to be done.
==========================================
Bash profile
=========
Node 1 (DB1): export ORACLE_SID=TESTA
Node 2 (DB2): export ORACLE_SID=TESTB
Perform the same steps on both nodes. Once done, enable the RAC binaries on both nodes.
How to turn on RAC binaries
======================
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_on
$ make -f ins_rdbms.mk ioracle
To turn off RAC binaries
==================
$ make -f ins_rdbms.mk rac_off
$ make -f ins_rdbms.mk ioracle
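A quick heuristic for checking whether the current binaries are linked with RAC: in many releases, libknlopt.a under $ORACLE_HOME/rdbms/lib contains kcsm.o when the home was linked rac_on. Treat this as an assumption to verify for your version; the sketch below demonstrates the check against a fabricated archive rather than a real Oracle Home:

```shell
#!/bin/sh
# Returns success if the given knlopt archive looks RAC-linked
# (assumption: kcsm.o is present only after 'make ... rac_on')
rac_linked() {
  ar -t "$1" 2>/dev/null | grep -q '^kcsm\.o$'
}

# Demo with a fabricated archive standing in for
# $ORACLE_HOME/rdbms/lib/libknlopt.a
cd /tmp
: > kcsm.o
ar rc libknlopt_demo.a kcsm.o

if rac_linked /tmp/libknlopt_demo.a; then
  echo "RAC enabled"
else
  echo "RAC disabled"
fi
```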
ISSUE 2
=======
SQL> startup pfile='/tmp/pfile-rac'
ORA-00439: feature not enabled: Real Application Clusters
ORA-01078: failure in processing system parameters
Cause: RAC is disabled in this Oracle Home, so we need to relink the binaries to enable it:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_on
$ make -f ins_rdbms.mk ioracle
and then start the database again.
========================================================================
Cold Cloning of Database
========================================================================
Source Database Name: TEST
Clone Database Name: TESTCLON
Source Database physical files path=/u01/TEST/oradata
Cloned Database physical files path=/u02/TESTCLON/oradata
Steps to be followed:
Start up the source database (if not open):
$ export ORACLE_SID=TEST
$ sqlplus / as sysdba
SQL> startup
Find out the path and names of datafiles, control files, and redo log files.
SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;
Take the control file backup.
SQL> alter database backup controlfile to trace;
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/home/oratest/testfile.trc';
Parameter file backup.
If the 'TEST' database is using an spfile:
SQL> create pfile='/u02/TESTCLON/initTESTCLON.ora' from spfile;
If database is using pfile, use OS command to copy the pfile to a backup location.
Shut down the 'TEST' database:
SQL> shutdown
Copy all data files, control files, and redo log files of ‘TEST’ database to a target database location.
$ mkdir /u02/TESTCLON/oradata
$ cp /u01/TEST/oradata/* /u02/TESTCLON/oradata/
Create appropriate directory structure in clone database for dumps and specify them in the parameter file.
$ mkdir -p /u02/TESTCLON/{bdump,udump}
Edit the clone database parameter file and make necessary changes to the clone database
$ cd /u02/TESTCLON/
$ vi initTESTCLON.ora
db_name=TESTCLON
control_files=/u02/TESTCLON/oradata/cntrl01.ctl
background_dump_dest=/u02/TESTCLON/bdump
user_dump_dest=/u02/TESTCLON/udump
*._no_recovery_through_resetlogs=TRUE
. . .
. . .
:wq!
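Instead of editing every path by hand in vi, the path and name changes in the clone pfile can be scripted. A minimal sketch with a throwaway pfile (the parameter names follow the example above; the /tmp location is only for illustration):

```shell
#!/bin/sh
# Build a throwaway pfile with source-style paths
mkdir -p /tmp/TESTCLON
cat > /tmp/TESTCLON/initTESTCLON.ora <<'EOF'
db_name=TEST
control_files=/u01/TEST/oradata/cntrl01.ctl
background_dump_dest=/u01/TEST/bdump
user_dump_dest=/u01/TEST/udump
EOF

# Rewrite the db name and every file path for the clone
sed -i \
  -e 's/^db_name=TEST$/db_name=TESTCLON/' \
  -e 's|/u01/TEST|/u02/TESTCLON|g' \
  /tmp/TESTCLON/initTESTCLON.ora

cat /tmp/TESTCLON/initTESTCLON.ora
```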
Startup the clone database in NOMOUNT stage.
$ export ORACLE_SID=TESTCLON
SQL> startup nomount pfile='/u02/TESTCLON/initTESTCLON.ora'
Edit the control file creation script as required:
=============================
CREATE CONTROLFILE SET DATABASE "TESTCLON" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/u02/TESTCLON/oradata/redo01.log' SIZE 5M,
GROUP 2 '/u02/TESTCLON/oradata/redo02.log' SIZE 5M
DATAFILE
'/u02/TESTCLON/oradata/system01.dbf',
'/u02/TESTCLON/oradata/undotbs01.dbf',
'/u02/TESTCLON/oradata/sysaux01.dbf',
'/u02/TESTCLON/oradata/users01.dbf',
'/u02/TESTCLON/oradata/example01.dbf'
CHARACTER SET AL32UTF8
;
Create the control file by running the script generated from the trace:
SQL> @/u01/TEST/source/udump/cntrl.sql
Once the control file is successfully created, open the database with the RESETLOGS option.
SQL> alter database open resetlogs;
Add temporary files and post clone steps if required.
========================================================================
References
=========
Google,daily activities and other sites.
http://oracleinaction.com/clone-db-home/
http://www.onlinedbasupport.com/2010/12/10/step-by-step-cloning-database-10g-rac-to-rac/
http://ashishdba.blogspot.in/2012/10/node-addition-in-10g.html
========================================================================
Cloning RAC Database (RAC-RAC)
- Install Oracle Clusterware on target server
- Install Oracle Database Software on target server
- Copy the cold backup to a shared location on the target server.
(If you are using a hot backup, also copy the archive log files. We require archivelog files from both instances, e.g. 1st instance = log_652206883_34730_1.arc and 2nd instance = log_652206883_35888_2.arc, to recover the database.)
- Generate pfile from spfile on Production and transfer the file to target $ORACLE_HOME/dbs
- Edit the pfile to replace the PROD entries with TEST/UAT/DEV values. Also remove the entries below from the pfile (initERPUAT.ora):
*.compatible='10.2.0'
*.control_files='/uatdata/oracle/db/apps_st/data/cntrl01.dbf','/uatdata/oracle/db/apps_st/data/cntrl02.dbf','/uatdata/oracle/db/apps_st/data/cntrl03.dbf'
*.db_name='ERPUAT'
ERPUAT.instance_name='ERPUAT1'
ERPUAT.local_listener='ERPUAT1_LOCAL'
*.log_archive_dest_1='location=/uatarch/arch'
*.log_archive_format='log_%r_%s_%t.arc'
ERPUAT.remote_listener='ERPUAT_REMOTE'
ERPUAT.service_names='ERPUAT'
*.db_file_name_convert=(/oraproddata/oracle/db/apps_st/DATA,/uatdata/oracle/db/apps_st/DATA)
*.log_file_name_convert=(/oraproddata/oracle/db/apps_st/DATA,/uatdata/oracle/db/apps_st/DATA)
*._no_recovery_through_resetlogs=TRUE
- Export ORACLE_SID=ERPUAT and start the database in NOMOUNT state.
- Create the control file, or rename the datafiles (change their locations) if you want to keep the DB name the same.
Note: If using a hot backup, once the control file is created, recover the database using the command below and provide the necessary archivelog files.
To check which archivelog files to supply, execute the following on the source database:
SELECT SEQUENCE# , name FROM v$archived_log WHERE 5965684806935 BETWEEN FIRST_CHANGE# AND NEXT_CHANGE#;
Replace 5965684806935 with your SCN.
Recover the database using the backup controlfile until cancel, then open it with RESETLOGS:
SQL> recover database using backup controlfile until cancel;
SQL> ALTER DATABASE OPEN RESETLOGS;
DATABASE altered.
Create Temporary Tablespace
CREATE TEMPORARY TABLESPACE TMP TEMPFILE '/uatdata/oracle/db/apps_st/data/tmp01.dbf' SIZE 2048M;
Tablespace created.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TMP;
Database altered.
Shut down the database and export ORACLE_SID=ERPUAT1.
Edit the pfile to add the entries below (initERPUAT1.ora):
*.cluster_database_instances=2
*.cluster_database=TRUE
*.compatible='10.2.0'
*.control_files='/uatdata/oracle/db/apps_st/data/cntrl01.dbf','/uatdata/oracle/db/apps_st/data/cntrl02.dbf' ,'/uatdata/oracle/db/apps_st/data/cntrl03.dbf'
*.db_name='ERPUAT'
ERPUAT1.instance_name='ERPUAT1'
ERPUAT2.instance_name='ERPUAT2'
ERPUAT1.instance_number=1
ERPUAT2.instance_number=2
ERPUAT1.local_listener='ERPUAT1_LOCAL'
ERPUAT2.local_listener='ERPUAT2_LOCAL'
*.log_archive_dest_1='location=/uatarch/arch'
*.log_archive_format='log_%r_%s_%t.arc'
ERPUAT1.remote_listener='ERPUAT_REMOTE'
ERPUAT2.remote_listener='ERPUAT_REMOTE'
ERPUAT1.service_names='ERPUAT'
ERPUAT2.service_names='ERPUAT'
ERPUAT1.thread=1
ERPUAT2.thread=2
*.undo_management='AUTO'# Required 11i setting
ERPUAT1.undo_tablespace='UNDOTBS1'# Required 11i setting
ERPUAT2.undo_tablespace='UNDOTBS2'# Required 11i setting
*.workarea_size_policy='AUTO'# Required 11i setting
Start the 1st node and create redo logs for the 2nd node:
startup
ALTER DATABASE ADD logfile thread 2 GROUP 6 ('/uatdata/oracle/db/apps_st/data/redolog06a.dbf', '/uatdata/oracle/db/apps_st/data/redolog06b.dbf') SIZE 100M;
DATABASE altered.
ALTER DATABASE ADD logfile thread 2 GROUP 7 ('/uatdata/oracle/db/apps_st/data/redolog07a.dbf', '/uatdata/oracle/db/apps_st/data/redolog07b.dbf') SIZE 100M;
DATABASE altered.
ALTER DATABASE ADD logfile thread 2 GROUP 8 ('/uatdata/oracle/db/apps_st/data/redolog08a.dbf', '/uatdata/oracle/db/apps_st/data/redolog08b.dbf') SIZE 100M;
DATABASE altered.
ALTER DATABASE enable thread 2;
DATABASE altered.
Create the spfile from the pfile, shut down the database, and place the spfile in a shared location:
CREATE spfile FROM pfile;
File created.
shut immediate
DATABASE closed.
DATABASE dismounted.
ORACLE instance shut down.
Edit the pfile on both nodes to point to the shared spfile location:
[orauat@erpuatdbs01 dbs]$ cat initERPUAT1.ora
spfile=/uatdata/oracle/db/apps_st/data1/spfileERPUAT.ora
[orauat@erpuatdbs02 dbs]$ cat initERPUAT2.ora
spfile=/uatdata/oracle/db/apps_st/data1/spfileERPUAT.ora
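Each node's pointer pfile is just a one-line file naming the shared spfile, so it can be generated rather than typed; a sketch using /tmp directories as stand-ins for each node's $ORACLE_HOME/dbs:

```shell
#!/bin/sh
SPFILE=/uatdata/oracle/db/apps_st/data1/spfileERPUAT.ora

# Stand-ins for $ORACLE_HOME/dbs on node 1 and node 2
mkdir -p /tmp/node1_dbs /tmp/node2_dbs

echo "spfile=$SPFILE" > /tmp/node1_dbs/initERPUAT1.ora
echo "spfile=$SPFILE" > /tmp/node2_dbs/initERPUAT2.ora

cat /tmp/node1_dbs/initERPUAT1.ora
```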
Start the instance on both nodes.
Use srvctl to register the database and set the environment:
srvctl add database -d ERPUAT -o /orauatbin/oracle/db/tech_st/10.2.0 -m mbcgroup.LOCAL -p spfile=/uatdata/oracle/db/apps_st/data1/spfileERPUAT.ora -y AUTOMATIC
srvctl add instance -d ERPUAT -i ERPUAT1 -n ERPUATDBS01
srvctl add instance -d ERPUAT -i ERPUAT2 -n ERPUATDBS02
srvctl setenv database -d ERPUAT -t 'TNS_ADMIN=/orauatbin/oracle/db/tech_st/10.2.0/network/admin/ERPUAT1_erpuatdbs01'
srvctl setenv instance -d ERPUAT -i ERPUAT1 -t 'TNS_ADMIN=/orauatbin/oracle/db/tech_st/10.2.0/network/admin/ERPUAT1_erpuatdbs01'
srvctl setenv instance -d ERPUAT -i ERPUAT2 -t 'TNS_ADMIN=/orauatbin/oracle/db/tech_st/10.2.0/network/admin/ERPUAT2_erpuatdbs02'
srvctl setenv nodeapps -n ERPUATDBS01 -t 'TNS_ADMIN=/orauatbin/oracle/db/tech_st/10.2.0/network/admin/ERPUAT1_erpuatdbs01'
srvctl setenv nodeapps -n ERPUATDBS02 -t 'TNS_ADMIN=/orauatbin/oracle/db/tech_st/10.2.0/network/admin/ERPUAT2_erpuatdbs02'
Check the status of the instance using ./crs_stat -t
[root@erpuatdbs01 ~]# cd /orauatbin/10gcrs/10.2.0/crs/bin/
[root@erpuatdbs01 bin]# ./crs_stat -t
Name Type Target State Host
---------------------------------------------------------------------
ora....T1.inst application OFFLINE OFFLINE
ora....T2.inst application OFFLINE OFFLINE
ora.ERPUAT.db application OFFLINE OFFLINE
ora....01.lsnr application ONLINE ONLINE erpuatdbs01
ora....s01.gsd application ONLINE ONLINE erpuatdbs01
ora....s01.ons application ONLINE ONLINE erpuatdbs01
ora....s01.vip application ONLINE ONLINE erpuatdbs01
ora....02.lsnr application ONLINE ONLINE erpuatdbs02
ora....s02.gsd application ONLINE ONLINE erpuatdbs02
ora....s02.ons application ONLINE ONLINE erpuatdbs02
ora....s02.vip application ONLINE ONLINE erpuatdbs02
[root@erpuatdbs01 bin]#
As the DB user, execute the following and shut down the database on both instances:
SQL> shutdown immediate
Then manage the database through srvctl:
srvctl stop database -d ERPUAT
srvctl start database -d ERPUAT
As the root user or cluster user, check the status of the database using ./crs_stat -t:
Name Type Target State Host
-----------------------------------------------------------
ora....T1.inst application ONLINE ONLINE erpuatdbs01
ora....T2.inst application ONLINE ONLINE erpuatdbs02
ora.ERPUAT.db application ONLINE ONLINE erpuatdbs01
ora....01.lsnr application ONLINE ONLINE erpuatdbs01
ora....s01.gsd application ONLINE ONLINE erpuatdbs01
ora....s01.ons application ONLINE ONLINE erpuatdbs01
ora....s01.vip application ONLINE ONLINE erpuatdbs01
ora....02.lsnr application ONLINE ONLINE erpuatdbs02
ora....s02.gsd application ONLINE ONLINE erpuatdbs02
ora....s02.ons application ONLINE ONLINE erpuatdbs02
ora....s02.vip application ONLINE ONLINE erpuatdbs02
Node Addition in 10g
====================
1. Run cluvfy to verify prerequisites.
2. From node1, execute:
$/u01/app/crs11g/oui/bin/addNode.sh
3. Specify the node2 VIP address and follow the instructions.
4. At the end of the installation it may throw a warning and ask you to click YES; click YES.
5. From node1:
$ /u01/app/crs11g/bin/racgons add_config node2:6251
6. From node1, set ORACLE_HOME=ASM_HOME, then execute addNode.sh from $ASM_HOME/oui/bin and follow the instructions.
7. From node1, set ORACLE_HOME=DB_HOME, then execute:
$/u01/app/oracle/product/11.1.0/db_1/oui/bin/addNode.sh
and follow the instructions.
8. From node2, start NETCA and configure the listener.
9. From node1, start DBCA from the ASM home to configure the ASM instance.
10. Again from node1, start DBCA from the DB home to add the DB instance.
11. Then from node1, start OEM and add the node2 database instance with OEM.
12. From the Server tab, click on Add Instance and follow the instructions.
================================================================================
11g R2 RAC: ADD A NODE
================================================================================
11g R2 RAC has greatly simplified the process of adding a node to the cluster compared to 10g RAC or 11g R1 RAC.
Now, with the introduction of SCAN and GPnP, we need to follow only a few simple steps.
Current scenario:
I have two nodes in the cluster presently.
Host names :
- host01.example.com
- host02.example.com
Node to be added :
- host03.example.com
Procedure:
————————————
Prepare the machine for third node
————————————
— Set kernel parameters
— Install required rpm’s
— Create users/groups
— configure oracleasm
root@host03#oracleasm configure -i
oracleasm exit
oracleasm init
oracleasm scandisks
oracleasm listdisks
All ASM disks will be listed.
– Configure ssh connectivity for grid user among all 3 nodes –
– On node3 as grid user
[grid@host03 .ssh]$ssh-keygen -t rsa
ssh-keygen -t dsa
cd /home/grid/.ssh
cat *.pub > host03
scp host03 host01:/home/grid/.ssh/
[grid@host03 .ssh] $ssh host01
– Enter password
[grid@host01 ~]$cd /home/grid/.ssh
cat host03 >> authorized_keys
scp authorized_keys host02:/home/grid/.ssh/
scp authorized_keys host03:/home/grid/.ssh/
– Test ssh connectivity on all 3 nodes as grid user –
– run following on all 3 nodes twice as grid user–
echo ssh host01 hostname >> a.sh
echo ssh host02 hostname >> a.sh
echo ssh host03 hostname >> a.sh
echo ssh host01-priv hostname >> a.sh
echo ssh host02-priv hostname >> a.sh
echo ssh host03-priv hostname >> a.sh
chmod +x a.sh
./a.sh
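The repeated echo lines above can equally be generated with a loop; same behaviour, shown here writing to a throwaway /tmp path:

```shell
#!/bin/sh
OUT=/tmp/a.sh
: > "$OUT"    # start with an empty script

# One ssh-hostname line per public and private hostname
for h in host01 host02 host03 host01-priv host02-priv host03-priv; do
  echo "ssh $h hostname" >> "$OUT"
done
chmod +x "$OUT"
cat "$OUT"
```

Running the generated script twice on each node accepts the host keys and confirms passwordless ssh both ways.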
– Run cluster verify to check that host03 can be added as node –
[grid@host01 ~]$ cluvfy stage -pre crsinst -n host03 -verbose
– If there is a time synchronization problem, restart the ntpd service on each node.
– The error 'grid is not a member of dba group' can be ignored.
[grid@host01 ~]$ . oraenv    (enter +ASM1 when prompted)
[grid@host01 ~]$ cd /u01/app/11.2.0/grid/oui/bin
– Add node
[grid@host01 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={host03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={host03-vip}"
– Execute orainstRoot.sh and root.sh on node3 as root –
[root@host03]# /u01/app/oraInventory/orainstRoot.sh
[root@host03]# /u01/app/11.2.0/grid/root.sh
– Check from host01 that the node has been added –
[grid@host01 ~]$ crsctl stat res -t
– Start any resources if they are not up already –
[grid@host01 ~]$ crsctl start resource <resource name>
After you have added a node, you can clone the database home onto it as described in the next section.
================================================================================
11g R2 RAC: CLONE DATABASE HOME
================================================================================
Current scenario:
- Two node RAC setup.
- Both the current nodes have non shared database home.
- Current two node setup is not able to handle increased workload.
- Want to add third node to the setup.
Need to
. Install oracle clusterware on the third node
. Install database software on the third node (Since database home is non shared).
Here is the detailed procedure which needs to be implemented:
Preparing to Clone Oracle RAC
=============================
On one of the existing nodes (node1 say):
Step 1 Create a backup of the source home
Create a copy of the Oracle RAC home. You will use this file to copy the Oracle RAC home to each node in the cluster.
When creating the backup (tar) file, the best practice is to include the release number in the name of the file. For example:
– Go to ORACLE_HOME –
[root@node1 root]# cd /u01/app/oracle/product/11.2.0/dbhome_1
– Create the backup (tar) file in /home/oracle with the name db1120.tgz:
[root@node1 dbhome_1]# tar -zcvf /home/oracle/db1120.tgz .
Step 2 Install and start Oracle Clusterware (Use addnode.sh to add new node in existing cluster)
Before you can use cloning to create a new Oracle RAC home, you must first install and start Oracle Clusterware on the node on which you want to copy a cloned Oracle RAC home. In other words, you configure an Oracle RAC home that you cloned from a source cluster onto the nodes in a target cluster in the same order that you installed the Oracle Clusterware and Oracle RAC software components on the original nodes.
————————————————
Deploying Oracle RAC Database Homes
————————————————
Deploying the Oracle RAC database home to a cluster is a multiple-step process.
This section provides step-by-step instructions that describe how to:
Prepare the new cluster node
Deploy the Oracle RAC database software
Run the clone.pl script on each node
Run the $ORACLE_HOME/root.sh script on each node
Step 1 Prepare the new cluster nodes
Perform the Oracle RAC preinstallation steps, including such things as:
· Specify the kernel parameters.
· Use short, nondomain-qualified names for all names in the Hosts file.
· Verify that you can ping the public and interconnect names.
· Ensure Oracle Clusterware is active.
· Ensure that Oracle ASM is active and that at least one Oracle ASM disk group exists and is mounted.
See your platform-specific Oracle RAC installation guide for a complete preinstallation checklist.
Step 2 Deploy the Oracle RAC database software on third node
To deploy the Oracle RAC software, you need to:
1. Restore the Oracle home to the third node.
[root@node1 root]# scp /home/oracle/db1120.tgz node3:/home/oracle/db1120.tgz
2. Create the directory for the Oracle home:
[root@node3 root]# mkdir -p /u01/app/oracle/product/11.2.0/dbhome_1
3. [root@node3 root]# cd /u01/app/oracle/product/11.2.0/dbhome_1
4. Extract the contents of the tar file
[root@node3 dbhome_1]# tar -zxvf /home/oracle/db1120.tgz
When providing the home location and path_name, the home location can be in the same directory path or in a different directory path from the source home that you used to create the tar.
5. Change the ownership of all files to the oracle and oinstall group. For example:
[root@node3 dbhome_1]# chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/dbhome_1
Step 3 Create a script file start.sh as the oracle user on node3 in the folder $ORACLE_HOME/clone/bin with the following environment variables.
[oracle@node3 dbhome_1]$cd $ORACLE_HOME/clone/bin
vi start.sh
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
cd $ORACLE_HOME/clone
THISNODE=`hostname -s`
E01=ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
E02=ORACLE_HOME_NAME=OraDBRAC
E03=ORACLE_BASE=/u01/app/oracle
C01="-O'CLUSTER_NODES={node1, node2, node3}'"
C02="-O'LOCAL_NODE=$THISNODE'"
perl $ORACLE_HOME/clone/bin/clone.pl $E01 $E02 $E03 $C01 $C02
THISNODE should be set to the name of the node to be added, e.g. node3.
Step 4 Run the start.sh script as the oracle user (the owner of the database home) on node3. This script will set up the database binaries on the node given in the THISNODE variable and update the inventories on all cluster nodes.
[oracle@node3 bin]$ chmod u+x start.sh
[oracle@node3 bin]$ ./start.sh
Step 5 Run the $ORACLE_HOME/root.sh script on each node
- Check the contents of the inventory.xml file; it should now contain information about the 11g Oracle home.
[oracle@node3 dbhome_1]$cd /u01/app/oracle/oraInventory/ContentsXML
vi inventory.xml
- Check the contents of the inventory.xml file on the other nodes. If the entry for the newly added node does not exist under the database home, add it manually:
[oracle@node1 dbhome_1]$vi /u01/app/oracle/oraInventory/ContentsXML/inventory.xml
[oracle@node2 dbhome_1]$vi /u01/app/oracle/oraInventory/ContentsXML/inventory.xml
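The manual inventory check can be scripted with a simple grep. The fragment below fabricates an inventory.xml for illustration only (the HOME/NODE_LIST element names mirror the real file's layout):

```shell
#!/bin/sh
# Fabricated inventory fragment for illustration only
cat > /tmp/inventory.xml <<'EOF'
<HOME NAME="OraDBRAC" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
  <NODE_LIST>
    <NODE NAME="node1"/>
    <NODE NAME="node2"/>
    <NODE NAME="node3"/>
  </NODE_LIST>
</HOME>
EOF

# Verify the newly added node is registered against the home
if grep -q '<NODE NAME="node3"/>' /tmp/inventory.xml; then
  echo "node3 present"
else
  echo "node3 missing - add it manually"
fi
```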