In the previous article we extended the Grid home to the third node (i.e. rac3).

Now we are going to extend the database home to the third node.

From an existing node (rac1), as the database software owner, run the following commands to extend the Oracle database software to the new node "rac3":

[oracle@rac1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1/
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"

Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "rac1"

Checking user equivalence...
User equivalence check passed for user "oracle"
WARNING:
Node "rac3" already appears to be part of cluster
Pre-check for node addition was successful.
Starting Oracle Universal Installer...

<< output truncated>>

Copying to remote nodes (Tuesday, December 24, 2016 2:22:40 PM IST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Tuesday, December 24, 2016 2:36:10 PM IST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/12.1.0.1/dbhome_1/root.sh #On nodes rac3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/oracle/product/12.1.0.1/dbhome_1 was successful.
Please check '/tmp/silentInstall.log' for more details.

Now run root.sh on the rac3 node as the root user:

[root@rac3 ~]# /u01/app/oracle/product/12.1.0.1/dbhome_1/root.sh
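
Before moving on, you can optionally confirm that the new home is in place on rac3 and registered in the central inventory. This is a quick sanity check only; the inventory path below assumes the default /u01/app/oraInventory location.

[oracle@rac3 ~]$ ls -d /u01/app/oracle/product/12.1.0.1/dbhome_1
[oracle@rac3 ~]$ grep -i dbhome_1 /u01/app/oraInventory/ContentsXML/inventory.xml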

Post Installation steps

From a node with an existing instance of "orcl", issue the following commands to create the required public redo log thread, undo tablespace, and spfile entries for the new instance.

From the rac1 node:

SQL> alter database add logfile thread 3 group 5 ('+DATA') size 50M, group 6 ('+DATA') size 50M;

Database altered.

SQL> alter database enable public thread 3;

Database altered.

SQL> create undo tablespace undotbs3 datafile '+DATA' size 200M autoextend on;

Tablespace created.

SQL> alter system set undo_tablespace='undotbs3' scope=spfile sid='orcl3';

System altered.

SQL> alter system set instance_number=3 scope=spfile sid='orcl3';

System altered.

SQL> alter system set cluster_database_instances=3 scope=spfile sid='*';

System altered.
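
Optionally, before registering the new instance, you can verify the thread, undo tablespace, and spfile entries just created. These are standard dictionary views; the exact output will depend on your environment.

SQL> select thread#, group#, bytes/1024/1024 as size_mb, status from v$log order by thread#, group#;
SQL> select tablespace_name, status from dba_tablespaces where contents = 'UNDO';
SQL> select sid, name, value from v$spparameter where sid = 'orcl3' and name in ('undo_tablespace','instance_number');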

Update the Oracle Cluster Registry (OCR)
The OCR is updated to account for the new instance, "orcl3", being added to the "orcl" cluster database. Add the "orcl3" instance to the "orcl" database and verify:

[oracle@rac3 bin]$ srvctl add instance -d orcl -i orcl3 -n rac3
[oracle@rac3 bin]$ srvctl status database -d orcl -v
Instance orcl1 is running on node rac1.
Instance orcl2 is running on node rac2.
Instance orcl3 is not running on node rac3.
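
To double-check the OCR registration (an optional check, not part of the original steps), you can also list the database configuration and confirm that instance orcl3 is mapped to node rac3:

[oracle@rac3 bin]$ srvctl config database -d orcl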

Start the Instance
Now that all the prerequisites have been satisfied and the OCR has been updated, the "orcl3" instance can be started. Start the newly created instance and verify:

[oracle@rac3 ~]$ srvctl start instance -d orcl -i orcl3
[oracle@rac1 ~]$ srvctl status database -d orcl -v
Instance orcl1 is running on node rac1. Instance status: Open.
Instance orcl2 is running on node rac2. Instance status: Open.
Instance orcl3 is running on node rac3. Instance status: Open.
[oracle@rac1 ~]$
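
The dictionary check below can be run from any instance; for example, connect locally on rac3 (assuming the usual environment variables for this home and instance):

[oracle@rac3 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1
[oracle@rac3 ~]$ export ORACLE_SID=orcl3
[oracle@rac3 ~]$ $ORACLE_HOME/bin/sqlplus / as sysdba
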
SQL> select inst_id, instance_name, status, to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME" from gv$instance order by inst_id;

   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ -----------------------------
         1 ORCL1            OPEN         16-AUG-2016 03:27:08
         2 ORCL2            OPEN         16-AUG-2016 03:36:37
         3 ORCL3            OPEN         16-AUG-2016 03:36:00
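
As a final optional check (not part of the original output), confirm that redo thread 3 is now open and owned by the new instance:

SQL> select thread#, status, enabled, instance from v$thread order by thread#;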

With that, the database home has been successfully extended to the third node and the new instance "orcl3" is up and running. In the next article we will show how to delete an instance in RAC.

Comments

  1. satya

    Hi,

    Please help with the following error (addnode.sh for the Oracle home):
    The node has been added successfully to the cluster, but when we try to copy the Oracle home binaries from an existing node using addnode.sh it says:
    ========================================================
    INFO: The new nodes 'lnx03' are already part of the cluster.
    SEVERE: The new nodes 'lnx03' are already part of the cluster.
    INFO: Alert Handler not registered, using Super class functionality
    INFO: Alert Handler not registered, using Super class functionality
    INFO: User Selected: Yes/OK

    INFO: Shutting down OUISetupDriver.JobExecutorThread
    SEVERE: [FATAL] [INS-10008] Session initialization failed
    CAUSE: An unexpected error occured while initializing the session.
    ACTION: Contact Oracle Support Services or refer logs
    SUMMARY:
    ==========================================================
    Background:
    lnx03 was earlier part of the cluster and was re-imaged due to disk failures; it is in sync with the other nodes in every aspect.