The generic steps to follow when adding a new node to the cluster are:
- Install Operating System
- Install required software
- Add/modify users and groups required for the installation
- Configure network
- Configure kernel parameters
- Configure services required such as NTP
- Configure storage (multipathing, zoning, storage discovery, ASMLib if used)
We assume that the OS, storage and network are already configured.
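Even with those assumptions, it is worth a quick sanity check on the new node before going further. The sketch below is only an illustration: the kernel parameters listed are examples of the usual RAC-related settings, and the grid/oracle user names follow this article's setup.

# On the new node (rac3), as root -- a quick sanity check, not a full prerequisite validation

# Kernel parameters currently in effect (compare against the values in /etc/sysctl.conf)
sysctl fs.aio-max-nr fs.file-max kernel.shmmni net.core.rmem_max net.core.wmem_max

# Required OS users and groups exist (IDs should match the existing nodes)
id grid
id oracle

# NTP daemon is running; the -x (slew) option is covered further below
service ntpd status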
Below is the /etc/hosts file after adding entries for the third node:
[root@rac3 ~]# cat /etc/hosts
127.0.0.1       localhost.localdomain   localhost
# Public
192.168.56.71   rac1.localdomain        rac1
192.168.56.72   rac2.localdomain        rac2
192.168.56.73   rac3.localdomain        rac3
# Private
192.168.10.1    rac1-priv.localdomain   rac1-priv
192.168.10.2    rac2-priv.localdomain   rac2-priv
192.168.10.3    rac3-priv.localdomain   rac3-priv
# Virtual
192.168.56.81   rac1-vip.localdomain    rac1-vip
192.168.56.82   rac2-vip.localdomain    rac2-vip
192.168.56.83   rac3-vip.localdomain    rac3-vip
# SCAN
#192.168.56.91  rac-scan.localdomain    rac-scan
#192.168.56.92  rac-scan.localdomain    rac-scan
#192.168.56.93  rac-scan.localdomain    rac-scan
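Before moving on, confirm that the new node can resolve and reach the existing nodes on both the public and the private network. A minimal check from rac3, using the host names from the /etc/hosts file above, could look like this:

# From rac3 -- confirm that the other nodes resolve and respond on both networks
for h in rac1 rac2 rac1-priv rac2-priv; do
    getent hosts $h          # should return the addresses shown in /etc/hosts above
    ping -c 2 $h
done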
As we can see, only two nodes are part of the cluster so far:
[oracle@rac1 ~]$ olsnodes -n -i -t
rac1    1       rac1-vip        Unpinned
rac2    2       rac2-vip        Unpinned
Also check the following (a quick command sketch follows this list):
- /etc/sysconfig/selinux to ensure that SELinux is in the required state (permissive in my case)
- chkconfig iptables --list to ensure that the local firewall is either off or, in combination with iptables -L, uses the correct settings
- NTP configuration in /etc/sysconfig/ntpd must include the “-x” flag. If it’s not there, add it and restart NTP
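These three checks can be done quickly from the command line. The sketch below assumes Oracle Linux / RHEL 6, where chkconfig and the ntpd sysconfig file are used:

# SELinux mode in effect and the configured default
getenforce
grep ^SELINUX= /etc/sysconfig/selinux

# Local firewall: either disabled, or iptables -L shows rules that allow cluster traffic
chkconfig iptables --list
iptables -L -n

# NTP slewing: the OPTIONS line should contain -x
grep OPTIONS /etc/sysconfig/ntpd
# service ntpd restart    # only needed after editing /etc/sysconfig/ntpd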
Run cluster verify to check that rac3 can be added as a node:
[grid@rac3 ~]$ $GRID_HOME/bin/cluvfy stage -pre nodeadd -n rac3 -fixup -fixupnoexec

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac3"

Checking user equivalence...
User equivalence check passed for user "grid"
Package existence check passed for "cvuqdisk"

Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.

Checking shared resources...
Checking CRS home location...
Location check passed for: "/u01/app/12.1.0.1/grid"
Shared resources check for node addition passed

Checking node connectivity...
Checking hosts config file...

<< output is truncated >>

NOTE: No fixable verification failures to fix

Pre-check for node addition was successful on all the nodes.
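Here the user equivalence check passed. If it fails in your environment, passwordless SSH for the grid user must be set up between all nodes before continuing. One manual way to do this (Oracle also ships an sshUserSetup.sh script with the installer that automates it) is sketched below:

# As grid on rac3 -- only needed if the user equivalence check fails
ssh-keygen -t rsa                 # accept the defaults, empty passphrase
ssh-copy-id grid@rac1
ssh-copy-id grid@rac2
# Repeat in the other direction from rac1 and rac2, then test:
ssh rac1 date
ssh rac2 date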
Run addNode.sh as the grid user to add the node:
[grid@rac3 ~]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@rac3 ~]$ cd $GRID_HOME/oui/bin
[grid@rac3 ~]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 1726 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 767 MB    Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2016-07-20_11-51-00PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2016-07-20_11-51-00PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

Prepare Configuration in progress.

Prepare Configuration successful.
..................................................   9% Done.

You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2016-07-20_11-51-00PM.log

Instantiate files in progress.

Instantiate files successful.
..................................................   15% Done.

Copying files to node in progress.

Copying files to node successful.
..................................................   79% Done.

Saving cluster inventory in progress.
..................................................   87% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0.1/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/12.1.0.1/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[rac3]
Execute /u01/app/12.1.0.1/grid/root.sh on the following nodes:
[rac3]

The scripts can be executed in parallel on all the nodes.

If there are any policy managed databases managed by cluster, proceed with the addnode procedure without executing the root.sh script. Ensure that root.sh script is executed after all the policy managed databases managed by clusterware are extended to the new nodes.
..........
Update Inventory in progress.
..................................................   100% Done.

Update Inventory successful.
Successfully Setup Software.
[grid@rac3 addnode]$
Execute orainstRoot.sh on rac3 as root:
[root@rac3 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
Execute root.sh on rac3 as root:
[root@rac3 addnode]# /u01/app/12.1.0.1/grid/root.sh

<< output truncated >>

CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac2'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac3'
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2016/07/21 00:10:23 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/07/21 00:11:03 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
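At this point the clusterware stack should be up on rac3. Before the cluster-wide checks below, a quick local check with the standard crsctl commands (the Grid home path follows this article's layout) looks like this:

# On rac3, as root or grid
/u01/app/12.1.0.1/grid/bin/crsctl check crs       # HAS, CRS, CSS and EVM on the local node
/u01/app/12.1.0.1/grid/bin/crsctl stat res -t     # resource status across the whole cluster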
To verify that the node has been added to the cluster:
[oracle@rac1 ~]$ olsnodes -n -i -t
rac1    1       rac1-vip        Unpinned
rac2    2       rac2-vip        Unpinned
rac3    3       rac3-vip        Unpinned
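The node applications (VIP, network, ONS) on the new node can also be checked with srvctl, for example:

# From any node, as the grid user
srvctl status nodeapps -n rac3
srvctl status vip -n rac3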
We can also check the status of the clusterware stack on all nodes:
[root@rac3 ~]# crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@rac3 ~]#
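Finally, cluvfy also provides a post-node-addition stage that validates the new configuration; running it is optional but a good habit:

# As the grid user, from any node
$GRID_HOME/bin/cluvfy stage -post nodeadd -n rac3 -verbose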
Thanks for reading this post. In the next article, we will add the database home on the new node.