The goal of this exercise is to clone an Oracle database that uses Grid/Clusterware components to a separate server.

Although you can find some articles on the Internet, the problem is that they only cover how to change the host name and/or IP address in the case of Oracle Restart.

This article will show you much more: how to clone a complete Oracle database that uses the Oracle Grid Infrastructure to a new server.

First, I want to explain why you would do that.

The first reason could be that you want an environment identical to production.
Identical means in terms of production data and production configuration, not the CPU (the number of CPU cores can be lower). Here I suggest leaving the memory as is.

Having the same environment (apart from the number of CPU cores) allows you to test a database upgrade scenario.

You can also run performance tests, as you have the complete set of data.

Or you may need to set up a new environment for a new customer, where you will clean some data while the majority of the other settings remain the same.

Whatever your reason, you have several options, which can be reduced to just two:
– install the Oracle software (Grid & DB) on the newly set up server, create the ASM disks and restore the database from a backup.

Other variants of this option impose a performance overhead on the production system, which is not acceptable in mission-critical systems (for quiet and small databases you can use such variants).
For that reason I've eliminated this option.

– perform a VM clone (an LPAR clone in the case of IBM AIX) and clone all VGs (volume groups) at some point in time.
The main advantage of this option is that you get a ready OS with all the software identical to production, and the disks already mounted.

In the case of a regular database that uses a regular file system (no ASM/Grid/Clusterware), that is all you have to do.

Unfortunately, that is not the case for Oracle Grid/Clusterware-based databases like RAC, RAC One Node and even Oracle Restart, where you are still 100+ commands away from the finish.

This is where my journey begins.

In this blog post I'll describe only the main steps you need to perform when using storage-based cloning of a production database that is running all the time with no interruption.

This example is based on the IBM AIX OS and an Oracle Restart database.
Although Oracle Restart is the simplest case of an Oracle database using the Grid and Clusterware infrastructure, it is not much simpler than regular RAC or RAC One Node, which I also hope to cover in the future (the main difference is more Grid/Clusterware services along with a virtual private network).
When I logged into the cloned host, I first checked which services were up and running by executing the following command:


crsctl stat res -t

As the command returned an error message, I realized it wouldn't be as easy as I thought: Grid and Clusterware are deeply integrated with the OS, and I now had a new IP address and a new hostname, which are the basis for many services provided by Grid & Clusterware.
As everything was dead, you need to follow the steps below:


1.
The first step is to de-configure Oracle Restart. To do that, execute the following as the root user:

./roothas.pl -deconfig -force


2.
Configure the CSS & OHAS services as the root user:

./roothas.pl
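Note that roothas.pl is not on the default PATH; in 11.2 it lives under the crs/install directory of the Grid Infrastructure Oracle Home. A minimal sketch of how steps 1 and 2 are invoked in full (the angle-bracket path is the same placeholder used elsewhere in this post):

cd <11.2 Grid Infrastructure Oracle Home>/crs/install

# step 1: remove the existing Oracle Restart configuration
./roothas.pl -deconfig -force

# step 2: re-create the CSS & OHAS configuration for the new host
./roothas.pl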


3.
Perform the next step as the grid OS user (the Grid Infrastructure OS owner):

crsctl modify resource "ora.cssd" -init -attr "AUTO_START=1"
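To verify the attribute change took effect, one way is to print the full resource attributes and check AUTO_START (a quick sanity check, run as the same OS user):

crsctl stat res ora.cssd -init -f | grep AUTO_START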


4.
Stop HAS:

crsctl stop has



5.
Restart the OHAS stack as the grid or oracle OS user (the Grid Infrastructure OS owner):

crsctl start has


6.
Check the CSS & OHAS state:

crsctl check has

crsctl check css

crsctl stat resource

crsctl stat res -t


cssd should be ONLINE (and evmd as well).
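For reference, on a healthy stack the first two checks return messages along these lines (the exact wording can vary between versions):

crsctl check has
CRS-4638: Oracle High Availability Services is online

crsctl check css
CRS-4529: Cluster Synchronization Services is online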

7.
Re-create the default listener (LISTENER) using port 1521 (or your desired port), through the NETCA GUI located in the new Grid Infrastructure Oracle Home (or manually if you do not have graphical access), as the grid or oracle OS user (the Grid Infrastructure OS owner):

srvctl add listener

srvctl start listener

CRS-2674: Start of 'ora.LISTENER.lsnr' on 'hostname' failed

As you can see, I got an error and the listener was not started.

By executing the following command, I could reveal what was behind the error.

lsnrctl start listener

TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production
System parameter file is /network/admin/listener.ora
Log messages written to /xxx/xxx/oracle/diag/tnslsnr/hostname/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=old_hostname)(PORT=1521)))
TNS-12545: Connect failed because target host or object does not exist
 TNS-12560: TNS:protocol adapter error
  TNS-00515: Connect failed because target host or object does not exist
   IBM/AIX RISC System/6000 Error: 68: Can't assign requested address

Listener failed to start. See the error message(s) above...


To fix this, follow these steps:

cd <11.2 Grid Infrastructure Oracle Home>/network/admin

and then execute:

vi listener.ora

Change the HOST entry from the old server name to the new one:

(ADDRESS = (PROTOCOL = TCP)(HOST = OLD_SERVER_NAME)(PORT = 1521)) ----> (ADDRESS = (PROTOCOL = TCP)(HOST = NEW_SERVER_NAME)(PORT = 1521))
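Alternatively, you can script the same change (a sketch, assuming listener.ora is the file in the Grid home's network/admin directory shown above, and that OLD_SERVER_NAME and NEW_SERVER_NAME are placeholders for your real host names):

cd <11.2 Grid Infrastructure Oracle Home>/network/admin
cp listener.ora listener.ora.bak
# AIX sed has no -i option, so write the edited copy back over the original
sed 's/HOST = OLD_SERVER_NAME/HOST = NEW_SERVER_NAME/' listener.ora.bak > listener.ora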


Save the file, and try to start the listener again.


srvctl start listener

This time you should have a running listener. The following command will confirm that:

srvctl status listener

Listener LISTENER is enabled
Listener LISTENER is running on node(s): new_hostname




8.
Now it's time to create the init+ASM.ora file in the <11.2 Grid Infrastructure Oracle Home>/dbs directory with the following parameters:

asm_diskgroups=  	

asm_diskstring= '/dev/oracleasm/disks/*'  

instance_type='asm'

large_pool_size=12M
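If you want to create the file in one step, a here-document will do (a sketch; the values simply mirror the parameters listed above, so adjust the disk string to your environment):

cd <11.2 Grid Infrastructure Oracle Home>/dbs
cat > init+ASM.ora <<'EOF'
asm_diskgroups=
asm_diskstring='/dev/oracleasm/disks/*'
instance_type='asm'
large_pool_size=12M
EOF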




9.
Add the ASM instance as the grid or oracle OS user (the Grid Infrastructure OS owner) by executing the following:

srvctl add asm



10.
Enable ASM instance Auto Start as the grid or oracle OS user (the Grid Infrastructure OS owner) as follows:

<11.2 Grid Infrastructure Oracle Home>/bin/crsctl modify resource "ora.asm" -attr "AUTO_START=1"  


11.
Make sure the disks are discovered by kfod as the grid or oracle OS user (the Grid Infrastructure OS owner) as follows:

<11.2 Grid Infrastructure Oracle Home>/bin/kfod asm_diskstring='ORCL:*' disks=all
or 
<11.2 Grid Infrastructure Oracle Home>/bin/kfod asm_diskstring='/*' disks=all


[root@+ASM@hostname:/] lspv
hdisk0          00fa0d66205e7c3b                    rootvg          active      
hdisk32         00fa0d6625405efc                    logsVG       active      
hdisk33         00fa0d662535fd8b                    dataVG           active      
hdisk34         none                                None                        
hdisk35         none                                None                        
hdisk36         none                                None                        
hdisk37         none                                None                        
hdisk38         none                                None                        
hdisk39         none                                None                        
hdisk40         none                                None                        
hdisk41         none                                None                        
hdisk42         none                                None                        
hdisk43         none                                None                        
hdisk44         none                                None                        
hdisk45         none                                None                        
hdisk46         none                                None                        


Change the permissions on the ASM disks:

chown oracle:oracle /dev/rhdisk34
chmod 660 /dev/rhdisk34

chown oracle:oracle /dev/rhdisk35
chmod 660 /dev/rhdisk35
...
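Instead of repeating the two commands for every device, you can loop over the candidates (a sketch, assuming the ASM candidate disks are rhdisk34 through rhdisk46 as shown in the lspv output, and that oracle:oracle is the correct owner in your installation):

for i in 34 35 36 37 38 39 40 41 42 43 44 45 46
do
    # hand ownership of the raw device to the Oracle software owner
    chown oracle:oracle /dev/rhdisk$i
    chmod 660 /dev/rhdisk$i
done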

After that I get the expected result:

kfod asm_diskstring='ORCL:*' disks=all
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME                                                          
================================================================================
      +ASM                      



Now you should be able to start the ASM instance.
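A sketch of how to start it, either through the Grid stack or directly with SQL*Plus (assuming the environment of the Grid Infrastructure OS owner, with ORACLE_HOME pointing at the Grid home):

srvctl start asm

# or manually:
export ORACLE_SID=+ASM
sqlplus / as sysasm
SQL> startup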


12.
Validate that the candidate disks are being discovered:

SYS@+ASM> select  path  from v$asm_disk;

PATH
----------------------------------------------------------------------------------------------------
/dev/rhdisk34
/dev/rhdisk35
/dev/rhdisk36
/dev/rhdisk37
/dev/rhdisk38


13.
Create the spfile (a step I assume every decent DBA is able to perform).
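For completeness, a minimal sketch of that step, creating the spfile in the default dbs location from the pfile written in step 8 (once a diskgroup is mounted you can of course place the spfile inside ASM instead):

export ORACLE_SID=+ASM
sqlplus / as sysasm
SQL> create spfile from pfile='<11.2 Grid Infrastructure Oracle Home>/dbs/init+ASM.ora';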



14. 
Add the new ASM spfile and listener to the new ASM instance resource:

$> <11.2 Grid Infrastructure Oracle Home>/bin/srvctl modify asm -p <ASM spfile location>

$> <11.2 Grid Infrastructure Oracle Home>/bin/srvctl modify asm  -l LISTENER



15.
At the end, you need to add the database as a resource under the control of the Grid Infrastructure.

srvctl add database -d db_name -o <Oracle Database Home> -p <spfile location> -a "data,fra,temp"

srvctl add service -s service_name -d db_name -P BASIC

srvctl start database -d db_name
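For illustration, with a hypothetical database named ORCL the three commands could look like this (the database name, Oracle home path, spfile location and service name are all placeholders, not values from this system):

srvctl add database -d ORCL -o /u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/ORCL/spfileORCL.ora -a "DATA,FRA,TEMP"
srvctl add service -s ORCL_SRV -d ORCL -P BASIC
srvctl start database -d ORCL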

I also suggest you rename the database and change the DBID, but that won't be described here as it involves too many steps.

Generally, it's a standard procedure, with the addition of deleting/renaming/adding Clusterware resources.
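If you do go that way, the DBID change itself is typically done with the DBNEWID (nid) utility against a mounted database; just as a pointer, the core command looks roughly like this (ORCLTST is a hypothetical new name, and the full procedure also touches the password file, the spfile and the Clusterware resource registration):

nid TARGET=SYS DBNAME=ORCLTST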




Comments

Kumar Nishat
2020-03-02 17:44:41
Hello, I want to know if these are valid for 12.1.0.2 GI/Clusterware. I am doing a similar cloning and need help on this. This post helps.
Josip Pojatina
2020-03-02 22:34:03
I didn't perform it for 12+. There are some changes in 12c, but the idea and the majority of the steps should be similar.
