I must thank my fellow DBA Franky Weber Faust for the original publication on his blog.
Objective: describe the advantages of separating the OCR and the Voting Disk, and present the process needed to do this.
NOTE: Although the procedure was performed in version 11gR2, the steps are the same for version 12c.
In this article we will cover an advanced topic. To begin, I will explain what the OCR and the Voting Disk are.
Oracle Clusterware has two very important components: one manages the cluster configuration and the other manages the nodes that are part of the cluster.
The Oracle Cluster Registry (OCR) is responsible for managing the configuration and resources of Oracle Clusterware and Oracle RAC. We can also include the Oracle Local Registry (OLR) here: it resides on each cluster node and manages the Oracle Clusterware configuration locally.
The Voting Disk is where the cluster's voting files are stored; these hold information about the nodes that are part of the cluster. For a node to be a member of the cluster, it must have access to the voting files.
The OCR and the Voting Disk can be stored in Oracle ASM or on a shared filesystem. Oracle's direction is to use ASM for this.
Now we know what these components are, but we still do not know how they work.
The OCR keeps track of the cluster's resources, such as ASM instances, database instances, diskgroups, SCAN listeners, VIPs, nodeapps, and so on. It records which resource is active or inactive and which nodes are in the cluster configuration; when a node is added to or removed from the cluster, that information is recorded in the OCR. It is the center of cluster information and must reside on storage shared by all nodes. It can have up to 4 mirrors and is managed through the "ocrconfig", "ocrdump" and "ocrcheck" utilities, preferably run as root. The OCR can be managed from any cluster node, and its backups run automatically every 4 hours: Oracle Clusterware retains the last 3 backups plus the last daily and the last weekly backup. You cannot change the retention or frequency of the automatic backups, and these are always physical. Manual OCR backups can be taken as either physical or logical.
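For reference, the automatic backups just described can be listed, and a manual (physical) backup or a logical export taken, with ocrconfig run as root. A minimal sketch; the export path is illustrative:

```shell
# List the automatic OCR backups retained by Clusterware (run as root)
ocrconfig -showbackup

# Take a manual physical backup of the OCR
ocrconfig -manualbackup

# Take a logical backup (export) of the OCR to a file (path is an example)
ocrconfig -export /backup/ocr_export.dmp

# Verify OCR integrity
ocrcheck
```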
Basically, the voting files are used so the cluster knows which nodes are available at any given time. Every node in the cluster writes a heartbeat message to the voting files every 5 seconds to inform the cluster that it is available. If these messages are not written for 30 seconds (the default on Linux), the node that cannot communicate is temporarily evicted from the cluster until it can communicate with the others again. The Voting Disk, where the voting files are stored, is therefore the center of node heartbeating; it can have multiple mirrors and can also be managed from any cluster node. Voting Disk backups are manual and should be included in your backup routines, and operations on the Voting Disk must be run as root. Up to version 11gR2, a backup of the Voting Disk must be taken after any addition or removal of a node in the cluster. With the Voting Disk information, Clusterware decides who is part of the cluster and manages the corresponding operations (election / eviction / split brain).
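The 30-second window mentioned above corresponds to the CSS misscount. The current thresholds can be inspected with crsctl (as root); a quick sketch:

```shell
# Network heartbeat threshold in seconds (default 30 on Linux)
crsctl get css misscount

# Voting file I/O timeout in seconds
crsctl get css disktimeout
```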
OCR and Voting Disk are essential components for running Oracle Clusterware; without either of them, your entire cluster stops working. During the installation and configuration of the Grid Infrastructure, we can choose only one diskgroup to store the Clusterware information. This means that if we lose this diskgroup, we lose both the OCR and the Voting Disk. The recovery process for each one is different, so simplifying things here means a faster recovery and therefore less downtime. The process outlined below shows how to separate the OCR and the Voting Disk into different diskgroups.
To separate them into distinct diskgroups, we start with the OCR and the Voting Disk in a single diskgroup, since we have no other option during the Grid Infrastructure installation. Let's check the current layout.
[root@clusterware1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2856
         Available space (kbytes) :     259264
         ID                       :  628337282
         Device/File Name         :    +CONFIG
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
[root@clusterware1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name                      Disk group
--  -----    -----------------                ---------                      ----------
 1. ONLINE   02aa40fc09384f14bf26cef30fca02b9 (/dev/oracleasm/disks/CONFIG1) [CONFIG]
 2. ONLINE   45b54bb316a84ff0bfec3c6faa4bc142 (/dev/oracleasm/disks/CONFIG2) [CONFIG]
 3. ONLINE   47a0e02aa9434fd2bf9dcac8ae2f12e3 (/dev/oracleasm/disks/CONFIG3) [CONFIG]
Located 3 voting disk(s).
We can see that both the Voting Disk and the OCR are stored in the CONFIG diskgroup.
Let's add new disks so we can create the new diskgroups.
Figure 1. Add a new disk in the SATA controller
Figure 2 – Select the desired format
Figure 3 – Choose “Fixed Size” to create the fixed size disk
Figure 4 – Define a name and location to store your disk. 1 GB is sufficient for this exercise
Figure 5 – Repeat the same process for the other disks. Create vd2.vdi, vd3.vdi, ocr1.vdi, ocr2.vdi, and ocr3.vdi
Figure 6 – After creating the rest of the disks, open the “Virtual Media Manager”
Figure 7 – Select one of the created disks and click on “Modify”
Figure 8 – Set the disk to “Shareable”
Repeat the procedure for all the other newly created disks: ocr2.vdi, ocr3.vdi, vd1.vdi, vd2.vdi, and vd3.vdi.
Figure 9 – After changing the disks, add them to the other node of your cluster
Figure 10 – Choose “Choose existing disk”
Figure 11 – Choose the disk
Figure 12 – Repeat the procedure for the other disks: vd2.vdi, vd3.vdi, ocr1.vdi, ocr2.vdi and ocr3.vdi
Start only one of the nodes, because we will partition the disks and then configure them in ASM.
[root@clusterware1 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00062c2a

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26         679     5242880   83  Linux
Partition 2 does not end on cylinder boundary.
/dev/sda3             679        1070     3145728   82  Linux swap / Solaris
/dev/sda4            1070        2611    12377088    5  Extended
/dev/sda5            1071        2611    12376064   83  Linux

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x77fc4eb9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   83  Linux

Disk /dev/sdc: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc559b074

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         261     2096451   83  Linux

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x9a582402

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         261     2096451   83  Linux

Disk /dev/sde: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdf: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdg: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdh: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdi: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdj: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
The command listed all the disks present on the node. Let's partition only those that are not yet partitioned.
[root@clusterware1 ~]# fdisk /dev/sde
The device contains neither a valid DOS partition table, nor a Sun, OSF or SGI disk label.
Building a new DOS disklabel with disk identifier 0x6952baca.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-130, default 130):
Using default value 130

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Do the same for the other disks.
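Partitioning the remaining disks one by one is repetitive. Assuming the device names shown above (sdf through sdj), the same interactive answers can be fed to fdisk in a loop; a sketch to adapt to your actual devices:

```shell
# Create one primary partition spanning each remaining disk.
# printf feeds fdisk the answers used interactively above:
# n (new), p (primary), 1 (partition number), two Enter defaults, w (write).
for dev in sdf sdg sdh sdi sdj; do
  printf 'n\np\n1\n\n\nw\n' | fdisk /dev/$dev
done
```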
After you finish, run the “fdisk -l” command again to see if all the disks are partitioned.
Now let’s create the disks in ASM.
[root@clusterware1 ~]# oracleasm createdisk VD1 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@clusterware1 ~]# oracleasm createdisk VD2 /dev/sdf1
Writing disk header: done
Instantiating disk: done
[root@clusterware1 ~]# oracleasm createdisk VD3 /dev/sdg1
Writing disk header: done
Instantiating disk: done
[root@clusterware1 ~]# oracleasm createdisk OCR1 /dev/sdh1
Writing disk header: done
Instantiating disk: done
[root@clusterware1 ~]# oracleasm createdisk OCR2 /dev/sdi1
Writing disk header: done
Instantiating disk: done
[root@clusterware1 ~]# oracleasm createdisk OCR3 /dev/sdj1
Writing disk header: done
Instantiating disk: done
[root@clusterware1 ~]# oracleasm listdisks
CONFIG1
CONFIG2
CONFIG3
OCR1
OCR2
OCR3
VD1
VD2
VD3
Start the other node and verify that ASM has identified the disks.
[root@clusterware2 ~]# oracleasm listdisks
CONFIG1
CONFIG2
CONFIG3
OCR1
OCR2
OCR3
VD1
VD2
VD3
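If the new ASM disks do not appear on the second node, a rescan usually brings them in. It was not needed in this transcript, but for reference (run as root):

```shell
# Refresh the ASMLib view of the shared disks, then list them
oracleasm scandisks
oracleasm listdisks
```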
Let’s now check the paths of the disks and create the OCR and VD diskgroups.
[root@clusterware2 ~]# su - oracle
[oracle@clusterware2 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM2
The Oracle base has been set to /u01/app/oracle
[oracle@clusterware2 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 24 23:01:10 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> column path format a45
SQL> set lines 200
SQL> set pages 500
SQL> SELECT name, path, header_status FROM v$asm_disk;

NAME            PATH                            HEADER_STATUS
--------------- ------------------------------- -------------
CONFIG_0000     /dev/oracleasm/disks/CONFIG1    PROVISIONED
CONFIG_0001     /dev/oracleasm/disks/CONFIG2    PROVISIONED
CONFIG_0002     /dev/oracleasm/disks/CONFIG3    PROVISIONED
                /dev/oracleasm/disks/OCR1       PROVISIONED
                /dev/oracleasm/disks/OCR2       PROVISIONED
                /dev/oracleasm/disks/OCR3      PROVISIONED
                /dev/oracleasm/disks/VD1        PROVISIONED
                /dev/oracleasm/disks/VD2        PROVISIONED
                /dev/oracleasm/disks/VD3        PROVISIONED

9 rows selected.

SQL> CREATE DISKGROUP VD NORMAL REDUNDANCY
  2  DISK '/dev/oracleasm/disks/VD1',
  3       '/dev/oracleasm/disks/VD2',
  4       '/dev/oracleasm/disks/VD3';

Diskgroup created.

SQL> CREATE DISKGROUP OCR NORMAL REDUNDANCY
  2  DISK '/dev/oracleasm/disks/OCR1',
  3       '/dev/oracleasm/disks/OCR2',
  4       '/dev/oracleasm/disks/OCR3';

Diskgroup created.
Observe that the newly created diskgroups are running only on the node where we created them. Start and enable them on the other node.
[oracle@clusterware2 ~]$ srvctl status diskgroup -g OCR
Disk Group OCR is running on clusterware2
[oracle@clusterware2 ~]$ srvctl status diskgroup -g VD
Disk Group VD is running on clusterware2
[oracle@clusterware2 ~]$ srvctl start diskgroup -g OCR -n clusterware1
[oracle@clusterware2 ~]$ srvctl enable diskgroup -g OCR -n clusterware1
[oracle@clusterware2 ~]$ srvctl start diskgroup -g VD -n clusterware1
[oracle@clusterware2 ~]$ srvctl enable diskgroup -g VD -n clusterware1
Change the compatible attributes of the diskgroups.
SQL> ALTER DISKGROUP OCR SET ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

Diskgroup altered.

SQL> ALTER DISKGROUP OCR SET ATTRIBUTE 'compatible.rdbms' = '11.2.0.0.0';

Diskgroup altered.

SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

Diskgroup altered.

SQL> ALTER DISKGROUP VD SET ATTRIBUTE 'compatible.rdbms' = '11.2.0.0.0';

Diskgroup altered.
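To confirm that the attributes took effect, the diskgroup attributes can be queried from the ASM instance. A sketch, run as the Grid owner with the ASM environment set:

```shell
# List the compatible.* attributes of all mounted diskgroups
sqlplus -S / as sysasm <<'EOF'
SELECT g.name diskgroup, a.name attribute, a.value
  FROM v$asm_diskgroup g
  JOIN v$asm_attribute a ON a.group_number = g.group_number
 WHERE a.name LIKE 'compatible.%';
EOF
```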
Let’s check the current Voting Disk situation.
[oracle@clusterware2 ~]$ su -
Password:
[root@clusterware2 ~]# . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/oracle
[root@clusterware2 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name                      Disk group
--  -----    -----------------                ---------                      ----------
 1. ONLINE   02aa40fc09384f14bf26cef30fca02b9 (/dev/oracleasm/disks/CONFIG1) [CONFIG]
 2. ONLINE   45b54bb316a84ff0bfec3c6faa4bc142 (/dev/oracleasm/disks/CONFIG2) [CONFIG]
 3. ONLINE   47a0e02aa9434fd2bf9dcac8ae2f12e3 (/dev/oracleasm/disks/CONFIG3) [CONFIG]
Located 3 voting disk(s).
Replace the current Voting Disk location with the VD diskgroup we created.
[root@clusterware2 ~]# crsctl replace votedisk +VD
Successful addition of voting disk ae63f25b7bf24f69bf67ba85ea3897e5.
Successful addition of voting disk 0d401ff1f3a74f0ebf358048d9ef0b93.
Successful addition of voting disk 650707c4cf7f4f34bfd9c5f53a8d3d79.
Successful deletion of voting disk 02aa40fc09384f14bf26cef30fca02b9.
Successful deletion of voting disk 45b54bb316a84ff0bfec3c6faa4bc142.
Successful deletion of voting disk 47a0e02aa9434fd2bf9dcac8ae2f12e3.
Successfully replaced voting disk group with +VD.
CRS-4266: Voting file(s) successfully replaced
[root@clusterware2 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name                  Disk group
--  -----    -----------------                ---------                  ----------
 1. ONLINE   ae63f25b7bf24f69bf67ba85ea3897e5 (/dev/oracleasm/disks/VD1) [VD]
 2. ONLINE   0d401ff1f3a74f0ebf358048d9ef0b93 (/dev/oracleasm/disks/VD2) [VD]
 3. ONLINE   650707c4cf7f4f34bfd9c5f53a8d3d79 (/dev/oracleasm/disks/VD3) [VD]
Located 3 voting disk(s).
Now we have the Voting Disk in its own dedicated diskgroup. Let's follow the procedure to leave the OCR in the same situation.
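The ocrcheck output below already shows the new +OCR location configured alongside +CONFIG. For reference, adding a second OCR location is a single command, run as root before deleting the old one:

```shell
# Register the new diskgroup as an additional OCR location (as root)
ocrconfig -add +OCR
```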
[root@clusterware2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2888
         Available space (kbytes) :     259232
         ID                       :  628337282
         Device/File Name         :    +CONFIG
                                    Device/File integrity check succeeded
         Device/File Name         :       +OCR
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

[root@clusterware2 ~]# ocrconfig -delete +CONFIG

[root@clusterware2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2888
         Available space (kbytes) :     259232
         ID                       :  628337282
         Device/File Name         :       +OCR
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
If we wanted additional redundancy, we could keep the CONFIG diskgroup as a second OCR location.
As a good practice, use ASM with both ASM-level and storage-level redundancy for the OCR and the Voting Disk.