In Oracle clustered database environments, a common challenge is finding somewhere to put shared files – backups, scripts, logs, and anything else you might want to make accessible to all nodes in the cluster. For convenience, the same applies to non-RAC environments (RAC One Node or any single-node installation of Grid Infrastructure).
The challenge grows when Oracle needs to perform backups and ASM appears, logically, to be the only viable place to put them – yet you may have a backup utility (NetBackup, for example) that doesn't cope well with ASM disk groups. A good solution is needed: one that leverages the existing high-availability and clustering facilities of Oracle's ASM, but can be exposed to the operating system and treated like any other mount point.
Enter Oracle Automatic Storage Management Cluster File System (ACFS for short), designed to fill this gap. Depending on the version you are using, it can support everything other than database files (11gR2), or even database files too (12.1), with the exception of Oracle Restart (single-node) datafiles. The list of what it can and cannot do by version is available here:
The architecture of ACFS looks like this (regardless of version):

So, enough of the marketing – how do we install it? For my example, I'm using a Red Hat Enterprise Linux 6.5 server with the UEK3 kernel (actually a VirtualBox machine) that is running an existing 11.2.0.4 database on a regular filesystem, and for the purpose of this exercise I have just installed Oracle Grid Infrastructure 11.2.0.4 to go with it. To get this working with Oracle ASM, I used VBoxManage to add disks and partitioned them with parted so they could become ASM candidate disks.
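If you want to reproduce the disk preparation, a minimal sketch follows. The disk file path, VM name, controller name, and device names are all illustrative assumptions – substitute your own:

```shell
# Hypothetical names/paths -- adjust for your environment.
# On the VirtualBox host: create a fixed-size virtual disk and attach it to the VM.
VBoxManage createhd --filename /vbox/disks/asm-disk2.vdi --size 10240 --variant Fixed
VBoxManage storageattach ora-node-2 --storagectl "SATA" --port 2 \
  --device 0 --type hdd --medium /vbox/disks/asm-disk2.vdi

# Inside the guest (as root): label the new disk and create one partition
# spanning it, ready to be stamped as an ASM candidate.
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary 1MiB 100%
```

From there the partition can be presented to ASM in whatever way your environment uses (udev rules or ASMLib).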
Prerequisites
First things first. You can go through this entire exercise and get to the point where you attempt to initialize ACFS, only to find it won't work because the drivers are not compatible with the UEK3 kernel. What? Well, it's only compatible after you patch Oracle Grid Infrastructure specifically to address this. You will need the following patches:
- 6880880
- 16318126
I hope you recognize the number of that first one. If so, great. If not, you might not enjoy this part. Install the OPatch update as usual, then generate the OCM response file and use opatch auto as root to install the UEK compatibility patch.
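The patching steps can be sketched as follows. The patch zip filenames and the Grid home path are assumptions based on this environment – check the patch README for the authoritative procedure:

```shell
# Illustrative paths -- substitute your own Grid Infrastructure home.
export GRID_HOME=/opt/app/oracle/product/11.2.0/grid

# 1. Update OPatch (patch 6880880) by unzipping over the existing OPatch directory.
unzip -o p6880880_112000_Linux-x86-64.zip -d $GRID_HOME

# 2. Generate the OCM response file that opatch auto requires.
$GRID_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp

# 3. As root, apply the UEK compatibility patch (16318126) with opatch auto.
unzip p16318126_112040_Linux-x86-64.zip -d /tmp/patches
$GRID_HOME/OPatch/opatch auto /tmp/patches/16318126 -ocmrf /tmp/ocm.rsp
```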
Oracle ASM setup
Create an ASM disk group to hold the ACFS volume you wish to create. Bear in mind you must specify the compatible attributes (note that creating an ADVM volume also requires compatible.advm to be set to at least 11.2 on the disk group).
[oracle@ora-node-2 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 22 13:17:40 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Automatic Storage Management option

SQL> create diskgroup backup
  2  external redundancy
  3  disk '/dev/asm-disk2' name backup
  4  ATTRIBUTE
  5  'compatible.asm' = '11.2',
  6* 'compatible.rdbms' = '11.2'
SQL> /

Diskgroup created.
ACFS Preparation
As root, run the ACFS initialization command. This is the part that will fail without patching.
[oracle@ora-node-2 ~]$ su -
Password:
[root@ora-node-2 ~]# cd /opt/app/oracle/product/11.2.0/grid/bin
[root@ora-node-2 bin]# ./acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
For grins, make sure the ACFS processes have started.
[root@ora-node-2 bin]# ps -ef|grep acfs
root     19410     2  0 18:22 ?        00:00:00 [acfsioerrlog]
root     19411     2  0 18:22 ?        00:00:00 [acfs_bast0]
root     19412     2  0 18:22 ?        00:00:00 [acfs_bast1]
root     19413     2  0 18:22 ?        00:00:00 [acfs_bast2]
root     19414     2  0 18:22 ?        00:00:00 [acfs_bast3]
root     19415     2  0 18:22 ?        00:00:00 [acfs_bast4]
root     19416     2  0 18:22 ?        00:00:00 [acfs_bast5]
root     19417     2  0 18:22 ?        00:00:00 [acfs_bast6]
root     19418     2  0 18:22 ?        00:00:00 [acfs_bast7]
root     19504 19356  1 18:26 pts/2    00:00:00 grep acfs
Next, load the ACFS driver.
[root@ora-node-2 bin]# ./acfsload start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed
Create your destination mount point directory.
[root@ora-node-2 bin]# mkdir /opt/app/rman_backups
[root@ora-node-2 bin]# chown oracle:oinstall /opt/app/rman_backups
[root@ora-node-2 bin]# chmod 775 /opt/app/rman_backups
Oracle ASM-ACFS configuration
Back as the oracle user, create the ACFS volume within the disk group.
[root@ora-node-2 bin]# su - oracle
[oracle@ora-node-2 ~]$ . oraenv
ORACLE_SID = [SECU] ? +ASM
The Oracle base remains unchanged with value /orab/app/oracle
[oracle@ora-node-2 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 22 19:37:45 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Automatic Storage Management option

SQL> alter diskgroup backup add volume acfsvol1 size 3G;

Diskgroup altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Automatic Storage Management option
So what did that do? Let’s take a look.
[oracle@ora-node-2 ~]$ ls -ltr /dev/asm/*
brwxrwx---. 1 root oinstall 251, 40449 Oct 22 19:38 /dev/asm/acfsvol1-79
[oracle@ora-node-2 ~]$ exit
logout
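You can also inspect the new volume through asmcmd; a quick check, assuming the +ASM environment is set:

```shell
# Show the ADVM volume's device name, size, and state for the backup disk group.
asmcmd volinfo -G backup acfsvol1
```

The Volume Device line in the output should match the /dev/asm/acfsvol1-79 device listed above.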
ACFS Volume Preparation
In order to use the volume we just created, it must have a filesystem created on it and be added to the ACFS cluster registry. Note that the user is now root.
[root@ora-node-2 bin]# /sbin/mkfs -t acfs -b 4k /dev/asm/acfsvol1-79 -n "acfsvol1"
mkfs.acfs: version         = 11.2.0.4.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume          = /dev/asm/acfsvol1-79
mkfs.acfs: volume size     = 3221225472
mkfs.acfs: Format complete.
[root@ora-node-2 bin]# /sbin/acfsutil registry -f -a \
  /dev/asm/acfsvol1-79 /opt/app/rman_backups
acfsutil registry: mount point /opt/app/rman_backups successfully added to Oracle Registry
ACFS Volume Mount
So, let’s mount that shiny new volume.
[root@ora-node-2 bin]# mount -t acfs /dev/asm/acfsvol1-79 /opt/app/rman_backups/
[root@ora-node-2 bin]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_oranode1-lv_root   45G   40G  2.9G  94% /
tmpfs                            498M   91M  407M  19% /dev/shm
/dev/sda1                        477M  108M  341M  24% /boot
/dev/sdb1                        112G   72G   35G  68% /opt/app
/dev/asm/acfsvol1-79             3.0G   45M  3.0G   2% /opt/app/rman_backups
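With the filesystem mounted, the database can treat it as an ordinary disk backup destination. A minimal sketch, run as the oracle user with the database environment set (the format mask is illustrative):

```shell
# Point RMAN disk backups at the new ACFS mount and take a backup.
rman target / <<'EOF'
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/opt/app/rman_backups/%U';
BACKUP DATABASE;
EOF
```

Because the mount point exists identically on every node, the same channel configuration works cluster-wide, and a third-party tool can simply sweep the directory.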
Startup Preparation
May as well add this to our server startup scripts.
# echo "/opt/app/oracle/product/11.2.0/grid/bin/acfsload start" >> /etc/rc.local
# echo "/sbin/mount.acfs -o all" >> /etc/rc.local
#
Reboot and….
You should now see that the ACFS processes are running and that your filesystem has been mounted.
[root@ora-node-2 ~]# ps -ef|grep acf
root      2422     2  0 19:49 ?        00:00:00 [acfsioerrlog]
root      2423     2  0 19:49 ?        00:00:00 [acfs_bast0]
root      2424     2  0 19:49 ?        00:00:00 [acfs_bast1]
root      2425     2  0 19:49 ?        00:00:00 [acfs_bast2]
root      2426     2  0 19:49 ?        00:00:00 [acfs_bast3]
root      2427     2  0 19:49 ?        00:00:00 [acfs_bast4]
root      2428     2  0 19:49 ?        00:00:00 [acfs_bast5]
root      2429     2  0 19:49 ?        00:00:00 [acfs_bast6]
root      2430     2  0 19:49 ?        00:00:00 [acfs_bast7]
root      2875  2852  1 19:55 pts/0    00:00:00 grep acf
[root@ora-node-2 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vg_oranode1-lv_root   45G   40G  2.9G  94% /
tmpfs                            498M   90M  409M  18% /dev/shm
/dev/sda1                        477M  108M  341M  24% /boot
/dev/sdb1                        112G   72G   35G  68% /opt/app
/dev/asm/acfsvol1-79             3.0G   45M  3.0G   2% /opt/app/rman_backups
Finally
For a RAC cluster, you have to do the following on every other node:
- Mount the ASM disk group
- Enable the ACFS Volume
- Start the ACFS processes
- Create your destination mount points and change their ownership/permissions
- Mount the ACFS volume.
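The steps above can be sketched as follows, reusing the names from this walkthrough (disk group, volume, and paths are assumptions carried over from the single-node examples):

```shell
# Run as root on each remaining node.
GRID_HOME=/opt/app/oracle/product/11.2.0/grid
$GRID_HOME/bin/acfsload start                 # load the ADVM/ACFS drivers

# As the oracle user, with the +ASM environment set:
#   asmcmd mount backup                       # mount the ASM disk group
#   asmcmd volenable -G backup acfsvol1       # enable the ADVM volume

# Back as root: create the mount point, fix ownership, and mount.
mkdir -p /opt/app/rman_backups
chown oracle:oinstall /opt/app/rman_backups
chmod 775 /opt/app/rman_backups
mount -t acfs /dev/asm/acfsvol1-79 /opt/app/rman_backups
```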
Trivia Question
Why don’t we just add the ACFS filesystem definition to /etc/fstab so it mounts at boot time automatically?
Answers on a postcard.