High Availability Network File Storage for Oracle Grid Infrastructure

High Availability Network File Storage (HANFS) for Oracle Grid Infrastructure provides uninterrupted service of NFS V2/V3/V4 exported paths by exposing NFS exports on Highly Available Virtual IPs (HAVIP) and using Oracle Clusterware agents to ensure that the HAVIPs and NFS exports are always online. While base NFS supports file locking, HANFS does not support NFS file locking.

Note:

  • This functionality relies on a working NFS server configuration available on the host computer. You must configure the NFS server before attempting to use the Oracle ACFS NFS export functionality.

  • This functionality is not available on Windows.

  • This functionality is not supported in Oracle Restart configurations.

  • The HAVIP cannot be started until at least one file system export resource has been created for it.

To set up High Availability NFS for Oracle Grid Infrastructure, perform the following steps:

  1. Add and register a new HAVIP resource.

    For example:

    # srvctl add havip -id hrexports -address my_havip_name 
    

    In the example, my_havip_name is mapped in the Domain Name System (DNS) to the VIP address and is used by client systems when mounting the file system.

    The initial processing of srvctl add havip ensures that:

    • The address being used is static, not dynamic

    • Any DNS name resolves to only one host, rather than to multiple hosts through round-robin DNS resolution

    • The provided IP address and resolved name are in the same subnet as the network resource

    • The name is not in use

    SRVCTL creates the appropriate HAVIP name from the specified id and ensures that it is unique. As a final validation step, SRVCTL ensures that the network resource (if provided), of the form ora.net#.network, exists. After this step, SRVCTL adds a new HAVIP resource of type ora.havip.type with the name ora.id.havip. In this example, the name is ora.hrexports.havip.

    Next, SRVCTL modifies the HAVIP start dependencies, such as active dispersion; sets the stop dependencies; and ensures that the description attribute (if provided) is set appropriately.
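
    To verify the new resource, you can display its configuration and current state; for example (output formats vary by release):

    $ srvctl config havip -id hrexports
    $ srvctl status havip -id hrexports

    As noted above, the HAVIP cannot be started until at least one file system export resource has been created for it in step 4.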

  2. Create a shared Oracle ACFS file system.

    High Availability NFS for Oracle Grid Infrastructure operates only with Oracle ACFS file systems configured for clusterwide accessibility and does not support Oracle ACFS file systems configured for access on particular subsets of cluster nodes. High Availability NFS is not supported with non-Oracle ACFS file systems.

    For information on creating an Oracle ACFS file system, refer to "Creating an Oracle ACFS File System".
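
    As an illustrative sketch only (the disk group name, volume size, and generated volume device path will differ in your environment), you might create an Oracle ADVM volume, look up its device path, and format it with Oracle ACFS on Linux as follows:

    $ asmcmd volcreate -G HR_DATA -s 10G VOLUME1
    $ asmcmd volinfo -G HR_DATA VOLUME1
    # mkfs -t acfs /dev/asm/d1volume1-295

    Use the volume device path reported by volinfo in the mkfs command and when registering the file system in the next step.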

  3. Register the Oracle ACFS file system.

    For example:

    $ srvctl add filesystem -device /dev/asm/d1volume1-295 -volume VOLUME1 \
      -diskgroup HR_DATA -mountpath /oracle/cluster1/acfs1
    

    See Also:

    Oracle Real Application Clusters Administration and Deployment Guide for information about the srvctl add filesystem command
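
    After the file system resource is registered, you can mount it on the cluster nodes with SRVCTL if it is not already mounted; for example:

    $ srvctl start filesystem -device /dev/asm/d1volume1-295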

  4. Create an Oracle ACFS file system export resource.

    For example:

    # srvctl add exportfs -id hrexports -path /oracle/cluster1/acfs1 \
      -name hrexport1
    

    After the file system export resource has been created, you can start the HAVIP created in step 1 to export the file system using the srvctl start havip command, as shown in the following example.
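
    For example, assuming the hrexports HAVIP and hrexport1 export created above (status output varies by release):

    # srvctl start havip -id hrexports
    $ srvctl status exportfs -name hrexport1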

    The FSID NFS export option is added to any specified export options, with a value in the range of one billion or higher to minimize potential collisions with other FSIDs set on the server. This FSID provides reliable failover between nodes and enables snapshot mounting.

    The default mount and export options for configured exports are the defaults for the NFS server.
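
    If non-default export settings are required, the export can instead be created with explicit client and option values. The client names and options below are illustrative only; check the srvctl add exportfs syntax for your release:

    # srvctl add exportfs -id hrexports -path /oracle/cluster1/acfs1 \
      -name hrexport1 -clients client1.example.com,client2.example.com \
      -options "rw,no_root_squash"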

    Relative paths that can be fully qualified are converted to absolute paths. Relative paths that cannot be fully qualified are not accepted as an export path.

    An HAVIP attempts to find the best server to run on, based on the available file systems and other running HAVIPs. However, this dispersion occurs only during CSS membership change events, such as a node joining or leaving the cluster.

    Note:

    Starting and stopping exports individually is not recommended; manage exports through the start and stop operations of the HAVIP.

    When the HAVIP is not running, its exports can be online on different nodes. After the associated HAVIP is started, the exports are gathered onto a single node.

    Clients using an export that is stopped while the HAVIP is running receive the NFS error ESTALE and must unmount and remount the file system.
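
    Once the HAVIP and its exports are online, client systems mount the exported path using the HAVIP name registered in DNS. A minimal sketch for a Linux NFS client, with an arbitrary mount point:

    # mkdir -p /mnt/hrexport1
    # mount -t nfs my_havip_name:/oracle/cluster1/acfs1 /mnt/hrexport1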

See Also: