I must thank my fellow DBA Franky Weber Faust for his original publication on the Portuguese OTN.

Objective: Explain and demonstrate the use of ACFS for storing Oracle Home binaries and the database itself, registering the ACFS filesystem to start automatically, configuring the logs to be generated in a unified directory, and displaying ACFS behavior with the Flex ASM architecture.

In the first article, I covered the benefits and some of the architecture of ACFS, and also demonstrated how to configure it. In this one we will see how to install Oracle and create a database on ACFS, how to register ACFS so its filesystems start and mount automatically, how to unify the logs and traces, and how ACFS works in the Flex ASM architecture.

To keep this article from getting too long, I will address the following topics:

  • ACFSUtil to mount ACFS;
  • Shared Oracle Home;
  • Database on ACFS;
  • Unifying diagnostic files;
  • ACFS on Flex ASM Architecture.

ACFSUtil to mount ACFS

I finished the previous article by mounting ACFS with Linux’s own mount command, passing the “-t acfs” parameter. Here I will demonstrate the acfsutil utility to register ACFS filesystems so they are mounted automatically when Clusterware boots.

Run the command below to register the ACFS filesystem:

[root@srv1 ~]# acfsutil registry -a /dev/asm/advmvol1-301 /mnt/advm/fs1
acfsutil registry: mount point /mnt/advm/fs1 successfully added to Oracle Registry
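If you want to review what is already registered, acfsutil can also list the registry entries. A quick check (run as root; output omitted here):

# list every filesystem registered in the ACFS mount registry
[root@srv1 ~]# acfsutil registry
# detailed information for a single volume
[root@srv1 ~]# acfsutil registry -l /dev/asm/advmvol1-301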

If we try to do the same for our EXT4 volume, see what happens:

[root@srv1 ~]# acfsutil registry -a /dev/asm/advmvol2-301 /mnt/advm/fs2
acfsutil registry: mount point /mnt/advm/fs2 successfully added to Oracle Registry
CRS-5016: Process "/bin/mount" spawned by agent "ORAROOTAGENT" for action "start" failed:
Details at "(:CLSN00010:)" in "/u01/app/oracle/diag/crs/srv2/crs/trace/crsd_orarootagent_root.trc"
Error mounting file system '/dev/asm/advmvol2-301' on '/mnt/advm/fs2': mount.acfs: ACFS-00591:
error found in volume disk header
mount.acfs: ACFS-02037: File system not created on a Linux system. Cannot mount.

CRS-5016: Process "/bin/mount" spawned by agent "ORAROOTAGENT" for action "start" failed:
Details at "(:CLSN00010:)" in "/u01/app/oracle/diag/crs/srv1/crs/trace/crsd_orarootagent_root.trc"
Error mounting file system '/dev/asm/advmvol2-301'
on '/mnt/advm/fs2': mount.acfs: ACFS-00591: error found in volume disk header
mount.acfs: ACFS-02037: File system not created on a Linux system. Cannot mount.

CRS-2674: Start of 'ora.advm.advmvol2.acfs' on 'srv1' failed
CRS-2674: Start of 'ora.advm.advmvol2.acfs' on 'srv2' failed

Looking at the errors, ACFS complains that the volume contains an EXT4 filesystem, not ACFS. So we mount it the right way for EXT4:

[root@srv1 ~]# mount /dev/asm/advmvol2-301 /mnt/advm/fs2
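Before rebooting, a quick way to confirm that both volumes are mounted with the expected filesystem types is df with the -T option (output omitted; the type column should show acfs for fs1 and ext4 for fs2):

[root@srv1 ~]# df -hT /mnt/advm/fs1 /mnt/advm/fs2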

Let’s reboot the servers to see whether the filesystems come back mounted:

[root@srv1 ~]# reboot

Broadcast message from root@srv1.localdomain
(/dev/pts/1) at 19:06 ...

The system is going down to reboot NOW!
[root@srv1 ~]# Connection to 192.168.2.124 closed by remote host.
Connection to 192.168.2.124 closed.
Frankys-MacBook-Pro:~ frankyweber$ ssh root@192.168.2.124
root@192.168.2.124's password:
Last login: Sat Nov 12 18:46:49 2016 from 192.168.2.101
[root@srv1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_srv1-lv_root
50G 21G 26G 45% /
tmpfs 2.0G 1.2G 755M 62% /dev/shm
/dev/sda1 485M 78M 382M 17% /boot
/dev/mapper/vg_srv1-lv_home
45G 199M 43G 1% /home
/dev/asm/advmvol1-301
512M 206M 307M 41% /mnt/advm/fs1

Note that the ACFS filesystem is mounted correctly after the reboot, but for the EXT4 filesystem we need to create a script to mount it automatically after ASM and ADVM start. We cannot include it in the /etc/fstab file, because that file is read at boot time, when ASM is not yet available.
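As an illustration only, here is a minimal sketch of such a script: it waits for the ADVM volume device to appear once ASM/ADVM are up and then mounts it. The file name, path and timings below are just examples, not taken from my environment; you could call it, for instance, from cron with @reboot or from a custom init script.

#!/bin/bash
# /usr/local/bin/mount_advm_ext4.sh - illustrative sketch only
DEV=/dev/asm/advmvol2-301     # ADVM volume device holding the EXT4 filesystem
MNT=/mnt/advm/fs2             # mount point
# wait up to 10 minutes for ASM/ADVM to export the volume device
for i in $(seq 1 60); do
  if [ -b "$DEV" ]; then
    # mount it only if it is not already mounted
    mountpoint -q "$MNT" || mount "$DEV" "$MNT"
    exit 0
  fi
  sleep 10
done
echo "ADVM volume $DEV not available, giving up" >&2
exit 1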

Shared Oracle Home

Now I will demonstrate how to install the Oracle Database software on a shared Oracle Home and then how to create your database on the same filesystem.

For this activity I will add new disks and create a new disk group, because some of the activities that will be presented require a larger disk group.

I’m going to skip the partitioning of the disk and its addition to ASM and present only what matters. So, with the disk already presented to the server and created in ASMLib, I will create a new disk group. This time I will perform the procedure via ASMCA so you can see that it is also possible to work with the GUI.

Click “Create” to create a new disk group:

Give a name to your disk group; in my case I named it “DB”. Then change the compatibility parameters, under the advanced options, to at least “12.1.0.0.0”; this gives us access to the advanced ACFS functionality that I will present in the next article. If you want, give a name to the disk you are using to create the new disk group. Select the type of redundancy; in my case I am choosing “External” because I do not want redundancy in ASM for this disk group. The “Allocation Unit” can be left at the default size of 1MB in this scenario. After everything is set, just click “OK”; you should receive a message that the disk group was created successfully.

 

We have the new disk group created.
Now let’s go to asmcmd to create our new volume.

[oracle@srv1 ~]$ asmcmd -p
ASMCMD [+]> volcreate -G DB -s 12G DBVOL1
ASMCMD [+]> volinfo -G DB -a
Diskgroup Name: DB

Volume Name: DBVOL1
Volume Device: /dev/asm/dbvol1-423
State: ENABLED
Size (MB): 12288
Resize Unit (MB): 64
Redundancy: UNPROT
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:

With the DBVOL1 volume created, we will now create a directory for our shared Oracle Home. Remember to create it on all servers where this directory will be shared; my cluster has only 2 nodes, so I will create it on both:

[oracle@srv1 ~] $ mkdir -p /u01/app/oracle/product/12.1.0/shared_dbhome
[oracle@srv2 ~] $ mkdir -p /u01/app/oracle/product/12.1.0/shared_dbhome

Now let’s go back to ASMCA to set up our new ACFS.
We access the “ASM Cluster File Systems” tab in ASMCA to create our filesystem.

 

Click “Create” and fill in the fields according to your environment.

If you click on “Show Command” the required command sequence is displayed. This is also the script that will be generated when we click “OK”:

Create ACFS Command:

/sbin/mkfs -t acfs /dev/asm/dbvol1-423

Following commands should be run as privileged user:

/u01/app/12.1.0/grid/bin/srvctl add filesystem -d /dev/asm/dbvol1-423 -m /u01/app/oracle/product/12.1.0/shared_dbhome
-u oracle -fstype ACFS -description 'Shared Oracle Home' -autostart ALWAYS

/u01/app/12.1.0/grid/bin/srvctl start filesystem -d /dev/asm/dbvol1-423

chown oracle:oinstall /u01/app/oracle/product/12.1.0/shared_dbhome

chmod 775 /u01/app/oracle/product/12.1.0/shared_dbhome

Click “OK.”

Run the ASMCA-generated script as root:

[root@srv1 ~] # /u01/app/oracle/cfgtoollogs/asmca/scripts/acfs_script.sh
ACFS file system /u01/app/oracle/product/12.1.0/shared_dbhome is mounted on nodes srv1, srv2
[root@srv1 ~] # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_srv1-lv_root
50G 21G 26G 45% /
tmpfs 2.0G 1.2G 759M 62% /dev/shm
/dev/sda1 485M 78M 382M 17% /boot
/dev/mapper/vg_srv1-lv_home
45G 199M 43G 1% /home
/dev/asm/advmvol1-301
512M 206M 307M 41% /mnt/advm/fs1
/dev/asm/dbvol1-423 12G 101M 12G 1% /u01/app/oracle/product/12.1.0/shared_dbhome

[oracle@srv2 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_srv1-lv_root
50G 21G 27G 45% /
tmpfs 2.0G 1.3G 751M 63% /dev/shm
/dev/sda1 485M 78M 382M 17% /boot
/dev/mapper/vg_srv1-lv_home
45G 181M 43G 1% /home
/dev/asm/advmvol1-301
512M 206M 307M 41% /mnt/advm/fs1
/dev/asm/dbvol1-423 12G 101M 12G 1% /u01/app/oracle/product/12.1.0/shared_dbhome
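Since the ASMCA script registered the filesystem as a Clusterware resource, srvctl can be used to check it, and the filesystem can even be grown online later if the shared home gets tight on space. A quick sketch (the resize value is just an example and assumes free space in the DB disk group):

# state of the ACFS resource on the cluster nodes
[oracle@srv1 ~]$ srvctl status filesystem -d /dev/asm/dbvol1-423
# its configuration (mount point, owner, autostart policy)
[oracle@srv1 ~]$ srvctl config filesystem -d /dev/asm/dbvol1-423
# grow the mounted filesystem online by 4G (run as root)
[root@srv1 ~]# acfsutil size +4G /u01/app/oracle/product/12.1.0/shared_dbhome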

Now we just need to install our software on this shared Oracle Home and then create our new database.

I will only show here what changes from a normal installation to Shared Oracle Home, because this is not an article about installing Oracle Database itself but about the functionality of ACFS.

 

 

On the screen shown below you must enter the directory created on ACFS. This is the only difference when installing the Oracle Database software.

 

 

If a message like the one below appears, make sure there is nothing inside the directory other than Linux’s lost+found directory, then click “Yes”.

 

 

 

Once this is done, proceed with your installation until it finishes. This way your binaries, and your entire Oracle Home, will be shared.

Database on ACFS

Let’s then create our database. I’ll create it inside this same filesystem, but you can create a new one if you want. I will use DBCA to create the database to facilitate the demonstration; again, I will only present what changes when creating the database on ACFS.

Create the directories where the database and the Fast Recovery Area will be stored:

 

[oracle@srv2 ~]$ mkdir /u01/app/oracle/product/12.1.0/shared_dbhome/oradata
[oracle@srv2 ~]$ mkdir /u01/app/oracle/product/12.1.0/shared_dbhome/fra

Since the filesystem is now shared it is only necessary to create the directories from one of the nodes. It is a single filesystem presented to all nodes in my cluster.
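A simple way to convince yourself of that is to create a file from one node and list it from the other (illustrative only; the file name is just an example):

[oracle@srv1 ~]$ touch /u01/app/oracle/product/12.1.0/shared_dbhome/oradata/hello.txt
[oracle@srv2 ~]$ ls /u01/app/oracle/product/12.1.0/shared_dbhome/oradata/
[oracle@srv1 ~]$ rm /u01/app/oracle/product/12.1.0/shared_dbhome/oradata/hello.txt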

 

Here’s how the configuration was in DBCA:

Now just proceed with the creation to the end and wait for it to complete.
This way your database will be stored on an ACFS filesystem. Note that we only point to the directory where the database files will be stored and, in the drop-down box, we select “File System” instead of the default “ASM”.
This type of implementation is quite common for SAP application databases; SAP itself requires shared directories on the database servers when we have a cluster.

Let’s check how our database was and where the files were stored:

[oracle@srv1 ~]$ . oraenv
ORACLE_SID = [oracle]? orcl1
ORACLE_HOME = [/home/oracle]? /u01/app/oracle/product/12.1.0/shared_dbhome
The Oracle base has been set to /u01/app/oracle
[oracle@srv1 ~] $ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Nov 22 19:54:56 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
And Real Application Testing options

SQL> set pages 200 lin 200
SQL> select name from v$datafile;

NAME
-------------------------------------------------- -------------------------------------------------- ---------
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/datafile/o1_mf_system_d36wdy95_.dbf
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/datafile/o1_mf_sysaux_d36wf0mx_.dbf
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/datafile/o1_mf_undotbs1_d36wf1j1_.dbf
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/datafile/o1_mf_undotbs2_d36wf8r3_.dbf
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/datafile/o1_mf_users_d36wf946_.dbf

SQL> select name from v$controlfile;

NAME
-------------------------------------------------- -------------------------------------------------- ---------
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/controlfile/o1_mf_d36wdxcg_.ctl
/u01/app/oracle/product/12.1.0/shared_dbhome/fra/ORCL/controlfile/o1_mf_d36wdxh4_.ctl

SQL> select member from v$logfile;

MEMBER
-------------------------------------------------- -------------------------------------------------- ---------
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/onlinelog/o1_mf_1_d36wdxnb_.log
/u01/app/oracle/product/12.1.0/shared_dbhome/fra/ORCL/onlinelog/o1_mf_1_d36wdxrd_.log
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/onlinelog/o1_mf_3_d36wdxwm_.log
/u01/app/oracle/product/12.1.0/shared_dbhome/fra/ORCL/onlinelog/o1_mf_3_d36wdy13_.log
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/onlinelog/o1_mf_2_d36x5x3o_.log
/u01/app/oracle/product/12.1.0/shared_dbhome/fra/ORCL/onlinelog/o1_mf_2_d36x5x7c_.log
/u01/app/oracle/product/12.1.0/shared_dbhome/oradata/ORCL/onlinelog/o1_mf_4_d36x5xbt_.log
/u01/app/oracle/product/12.1.0/shared_dbhome/fra/ORCL/onlinelog/o1_mf_4_d36x5xgf_.log

8 rows selected.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
And Real Application Testing options


[oracle@srv1 ~]$ srvctl config database -db orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/12.1.0/shared_dbhome
Oracle user: oracle
Spfile: /u01/app/oracle/product/12.1.0/shared_dbhome/oradata/orcl/spfileorcl.ora
Password file: /u01/app/oracle/product/12.1.0/shared_dbhome/oradata/orcl/orapworcl
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths: /u01/app/oracle/product/12.1.0/shared_dbhome
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: orcl1, orcl2
Configured nodes: srv1, srv2
Database is administrator managed

 

Note that the datafiles, controlfiles and logfiles were all stored on our ACFS, including the spfile and also the password file.

Unifying diagnostic files

Everything is fine up to here, but the diagnostic directories are still missing, right? Where are the logs and traces going? Let’s check:

 

SQL> col value for a70
SQL> select inst_id, name, value from gv$diag_info;
   INST_ID NAME                                          VALUE
---------- --------------------------------------------  --------------------------------------------------------
1 Diag Enabled                                           TRUE
1 ADR Base                                               /u01/app/oracle
1 ADR Home                                               /u01/app/oracle/diag/rdbms/orcl/orcl1
1 Diag Trace                                             /u01/app/oracle/diag/rdbms/orcl/orcl1/trace
1 Diag Alert                                             /u01/app/oracle/diag/rdbms/orcl/orcl1/alert
1 Diag Incident                                          /u01/app/oracle/diag/rdbms/orcl/orcl1/incident
1 Diag Cdump                                             /u01/app/oracle/diag/rdbms/orcl/orcl1/cdump
1 Health Monitor                                         /u01/app/oracle/diag/rdbms/orcl/orcl1/hm
1 Active Problem Count                                   0
1 Active Incident Count                                  0
2 Diag Enabled                                           TRUE
2 ADR Base                                               /u01/app/oracle
2 ADR Home                                               /u01/app/oracle/diag/rdbms/orcl/orcl2
2 Diag Trace                                             /u01/app/oracle/diag/rdbms/orcl/orcl2/trace
2 Diag Alert                                             /u01/app/oracle/diag/rdbms/orcl/orcl2/alert
2 Diag Incident                                          /u01/app/oracle/diag/rdbms/orcl/orcl2/incident
2 Diag Cdump                                             /u01/app/oracle/diag/rdbms/orcl/orcl2/cdump
2 Health Monitor                                         /u01/app/oracle/diag/rdbms/orcl/orcl2/hm
2 Active Problem Count                                   0
2 Active Incident Count                                  0

20 rows selected.

You can see that the ADR Base is pointing to the Oracle Base, which is the default destination for these files. So let’s change this and unify them on our ACFS:

SQL> sho parameter diag


NAME                            TYPE   VALUE
------------------------------------ ----------- ------------------------------
diagnostic_dest                 string  /u01/app/oracle


SQL> alter system set diagnostic_dest='/u01/app/oracle/product/12.1.0/shared_dbhome' sid='*';


System altered.


SQL> select inst_id, name, value from gv$diag_info;


   INST_ID NAME                             VALUE
---------- -------------------------------- -------------------------------------------------------------------------- 
1 Diag Enabled                              TRUE
1 ADR Base                                  /u01/app/oracle/product/12.1.0/shared_dbhome
1 ADR Home                                  /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl1
1 Diag Trace                                /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl1/trace
1 Diag Alert                                /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl1/alert
1 Diag Incident                             /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl1/incident
1 Diag Cdump                                /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl1/cdump
1 Health Monitor                            /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl1/hm
1 Active Problem Count                      0
1 Active Incident Count                     0
2 Diag Enabled                              TRUE
2 ADR Base                                  /u01/app/oracle/product/12.1.0/shared_dbhome
2 ADR Home                                  /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl2
2 Diag Trace                                /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl2/trace
2 Diag Alert                                /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl2/alert
2 Diag Incident                             /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl2/incident
2 Diag Cdump                                /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl2/cdump
2 Health Monitor                            /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl2/hm
2 Active Problem Count                      0
2 Active Incident Count                     0


20 rows selected.

Done. Logs and traces are now unified on our ACFS filesystem.
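Since the ADR now lives on the shared ACFS, the alert log of either instance can be read from any node of the cluster; for example, using the Diag Trace path shown above and the standard alert_<SID>.log file name:

[oracle@srv2 ~]$ tail -f /u01/app/oracle/product/12.1.0/shared_dbhome/diag/rdbms/orcl/orcl1/trace/alert_orcl1.log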

ACFS on Flex ASM Architecture

In version 12c of Grid Infrastructure, when we use Flex ASM, an ACFS mounted on a given disk group also benefits: if the local ASM instance serving an ACFS fails, the connection is redirected to a surviving ASM instance on another node through the ASM Proxy and the ASM Network. The same flow occurs when there is no local ASM instance at all; in fact, that is the purpose of Flex ASM: to decrease the number of ASM instances in a cluster and thus reduce the amount of resources needed to maintain a Grid Infrastructure environment. If ACFS is not in use, the ASM Proxy instance can be turned off. But remember that, for a database stored on ACFS to stay online, the ACFS resource must be available; if that resource, or even the ASM Proxy instance, goes down, the ACFS will be affected and consequently so will the database stored on it.
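A few quick checks are useful to confirm this setup in your own cluster (output omitted here; these are standard 12c commands, shown only as a sketch):

# is ASM running in Flex mode?
[oracle@srv1 ~]$ asmcmd showclustermode
# ASM configuration, including the instance count (cardinality)
[oracle@srv1 ~]$ srvctl config asm
# state of the ASM Proxy (ADVM proxy) instances used by ACFS
[oracle@srv1 ~]$ srvctl status asm -proxy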

This image briefly presents the architecture of ACFS in a cluster with Flex ASM:

 

 

Note below from which ASM instance the +APX2 proxy is receiving the ACFS metadata:

SQL> select distinct i.host_name asm_host, i.instance_name asm_instance, a.instance_name asm_client,
     a.status from gv$instance i, gv$asm_client a where i.inst_id = a.inst_id order by 1;

ASM_HOST             ASM_INSTANCE ASM_CLIENT STATUS
-------------------- ------------ ---------- ---------
srv1.localdomain     +ASM1        +APX1      CONNECTED
srv1.localdomain     +ASM1        +ASM1      CONNECTED
srv1.localdomain     +ASM1        -MGMTDB    CONNECTED
srv2.localdomain     +ASM2        +APX2      CONNECTED

When we stop the +ASM2 instance, see who provides the ACFS metadata for server srv2, whose ASM Proxy is the +APX2 instance:

[root@srv1 ~]# srvctl stop asm -n srv2 -f

SQL> select distinct i.host_name asm_host, i.instance_name asm_instance, a.instance_name asm_client,
     a.status from gv$instance i, gv$asm_client a where i.inst_id = a.inst_id order by 1;

ASM_HOST             ASM_INSTANCE ASM_CLIENT STATUS
-------------------- ------------ ---------- ---------
srv1.localdomain     +ASM1        +APX1      CONNECTED
srv1.localdomain     +ASM1        +APX2      CONNECTED
srv1.localdomain     +ASM1        +ASM1      CONNECTED
srv1.localdomain     +ASM1        -MGMTDB    CONNECTED

Note that the +ASM1 instance on server srv1 took over the +APX2 instance as a client, i.e., there was a failover in this demonstration, and throughout this period the database stayed online on both servers.
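To bring the environment back to normal, just start ASM again on srv2 and re-run the same query against gv$asm_client; note that the +APX2 proxy may keep its current connection to +ASM1 until it reconnects:

[root@srv1 ~]# srvctl start asm -n srv2
[root@srv1 ~]# srvctl status asm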

In this article we have seen how to install the Oracle Database software and how to create a database, both on ACFS. This type of configuration is quite common in some environments and, as I mentioned in part I of this series, the Oracle Database Appliance from version 12.1.2.0.0 onwards stores the database on ACFS filesystems. I also showed at the beginning of this article how to register ACFS for automatic mounting at cluster startup, and finally I presented a little of how ACFS works in the new Flex ASM architecture and how to configure the logs and traces to be generated in a unified directory. Stay tuned for the next related content.

References:
https://docs.oracle.com/cd/E11882_01/server.112/e18951/whatsnew.htm#OSTMG94051
http://www.oracle.com/technetwork/database/database-technologies/cloud-storage/cloudfs-datasheet-279846.pdf
http://www.oracle.com/technetwork/database/database-technologies/cloud-storage/cloudfs-overview-wp-279856.pdf
http://www.oracle.com/technetwork/products/cloud-storage/cloudfs-12c-overview-wp-1965426.pdf
https://docs.oracle.com/database/121/OSTMG/GUID-C91D3A8E-F329-44BE-97B7-8AB41897D8E1.htm#OSTMG31000
https://docs.oracle.com/database/121/DBLIC/editions.htm#CIHDDJCJ
http://www.oracle.com/technetwork/database/database-technologies/cloud-storage/benefits-of-oracle-acfs-2379064.pdf
