6 Configuring Storage for Oracle Grid Infrastructure and Oracle RAC

This chapter describes the storage configuration tasks that you must complete before you start the installer to install Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM), and that you must complete before adding an Oracle Real Application Clusters (Oracle RAC) installation to the cluster.

Note:

If you are currently using OCFS for Windows as your shared storage, then you must migrate to using Oracle ASM before the upgrade of Oracle Database and Oracle Grid Infrastructure.

6.1 Reviewing Oracle Grid Infrastructure Storage Options

This section describes the supported storage options for Oracle Grid Infrastructure for a cluster, and for features running on Oracle Grid Infrastructure.

See Also:

The Certification page in My Oracle Support for the most current information about certified storage options:
https://support.oracle.com

6.1.1 Supported Storage Options for Oracle Grid Infrastructure and Oracle RAC

Both Oracle Clusterware and the Oracle RAC database use files that must be available to all the nodes in the cluster. The following table shows the storage options supported for storing Oracle Clusterware and Oracle RAC files.

Table 6-1 Supported Storage Options for Oracle Clusterware and Oracle RAC Files and Home Directories

Oracle Automatic Storage Management (Oracle ASM):
  OCR and voting files:            Yes
  Oracle Grid Infrastructure home: No
  Oracle RAC home:                 No
  Oracle RAC database files:       Yes
  Oracle recovery files:           Yes

Oracle Automatic Storage Management Cluster File System (Oracle ACFS):
  OCR and voting files:            No
  Oracle Grid Infrastructure home: No
  Oracle RAC home:                 Yes
  Oracle RAC database files:       Yes (Oracle Database 12c Release 12.1.0.2 and later)
  Oracle recovery files:           Yes (Oracle Database 12c Release 12.1.0.2 and later)

Direct NFS Client access to a certified network attached storage (NAS) filer:
  OCR and voting files:            No
  Oracle Grid Infrastructure home: No
  Oracle RAC home:                 No
  Oracle RAC database files:       Yes
  Oracle recovery files:           Yes

  Note: NFS or Direct NFS Client cannot be used for Oracle Clusterware files.

Shared disk partitions (raw devices):
  Not supported for any file type or home directory.

Local file system (NTFS formatted disk):
  OCR and voting files:            No
  Oracle Grid Infrastructure home: Yes
  Oracle RAC home:                 Yes
  Oracle RAC database files:       No
  Oracle recovery files:           No


6.1.2 General Storage Considerations for Oracle Grid Infrastructure

Oracle Clusterware uses voting files to monitor cluster node status, and the Oracle Cluster Registry (OCR) is a file that contains the configuration information and status of the cluster. The installer automatically initializes the OCR during the Oracle Clusterware installation. Oracle Database Configuration Assistant (DBCA) uses the OCR for storing the configurations for the cluster databases that it creates.

Use the following guidelines when choosing storage options for the Oracle Clusterware files:

  • The Oracle Grid Infrastructure home (Grid home) cannot be stored on a shared file system; it must be installed on a local disk.

  • You can choose any combination of the supported storage options for each file type if you satisfy all requirements listed for the chosen storage options.

  • You can store Oracle Cluster Registry (OCR) and voting files in Oracle ASM disk groups. You can also store a backup of the OCR file in a disk group.

    See Also:

    Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
  • For a storage option to meet high availability requirements, the files stored on the disk must be protected by data redundancy, so that if one or more disks fail, then the data stored on the failed disks can be recovered. This redundancy can be provided externally by Redundant Array of Independent Disks (RAID) devices, or by logical volumes on multiple physical devices that implement the stripe-and-mirror-everything (SAME) methodology.

  • If you do not have RAID devices or logical volumes to provide redundancy, then you can create additional copies, or mirrors, of the files on different file systems. If you choose to mirror the files, then you must provide disk space for additional OCR files and at least two additional voting files.

  • Each OCR location should be placed on a different disk.

  • For voting files, ensure that each file does not share a hardware device, disk, or other single point of failure with any other voting file. Any node that cannot access an absolute majority of the configured voting files (more than half, or a quorum) is restarted.

  • If you do not have a storage option that provides external file redundancy, then you must use Oracle ASM, or configure at least three voting file locations to provide redundancy.

  • Except when using external redundancy, Oracle ASM mirrors all Oracle Clusterware files in separate failure groups within a disk group. A quorum failure group, a special type of failure group, contains mirror copies of voting files when voting files are stored in normal or high redundancy disk groups.
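The voting-file majority rule in the guidelines above can be sketched as a simple check (an illustrative Python sketch; `node_survives` is a hypothetical helper, not an Oracle tool):

```python
# Sketch of the voting-file majority rule: a node remains in the cluster
# only if it can access more than half of the configured voting files.

def node_survives(accessible_votes: int, total_votes: int) -> bool:
    """Return True if the node holds an absolute majority of voting files."""
    return accessible_votes > total_votes // 2

# With 3 voting files, a node needs access to at least 2 of them:
assert node_survives(2, 3)
assert not node_survives(1, 3)
# With 5 voting files (high redundancy), losing access to 2 is survivable:
assert node_survives(3, 5)
assert not node_survives(2, 5)
```

This is why configurations always use an odd number of voting files: it guarantees that at most one partition of the cluster can hold a majority.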

6.1.2.1 Storage Requirements When Using Oracle ASM for Oracle Clusterware Files

Be aware of the following requirements and recommendations when using Oracle ASM to store the Oracle Clusterware files:

  • You can store Oracle Cluster Registry (OCR) and voting files in Oracle ASM disk groups. You can also store a backup of the OCR file in a disk group.

  • The Oracle ASM instance must be clustered, and the disks must be available to all nodes in the cluster. Any node that does not have access to an absolute majority of voting files (more than half) is restarted.

  • To store the Oracle Clusterware files in an Oracle ASM disk group, the disk group compatibility must be at least 11.2.

    Note:

    If you are upgrading an Oracle ASM installation, then see Oracle Automatic Storage Management Administrator's Guide for more information about disk group compatibility.

6.1.3 General Storage Considerations for Oracle RAC

For all Oracle RAC installations, you must choose the shared storage options to use for Oracle Database files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.

If you plan to configure automated backups, then you must also choose a shared storage option to use for recovery files (the fast recovery area). Oracle recommends that you choose Oracle ASM as the shared storage option for the database data files and recovery files. The shared storage option that you choose for recovery files can be the same as or different from the shared storage option that you choose for the data files.

If you do not use Oracle ASM, then Oracle recommends that you place the data files and the fast recovery area in shared storage located outside of the Oracle home, in separate locations, so that a hardware failure does not affect availability.

Use the following guidelines when choosing storage options for the Oracle RAC files:

  • You can choose any combination of the supported storage options for each file type if you satisfy all requirements listed for the chosen storage options.

  • Oracle recommends that you choose Oracle ASM as the storage option for database and recovery files.

  • If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new Oracle ASM instance, then your system must meet the following conditions:

    • All the nodes in the cluster must have Oracle Clusterware and Oracle ASM 12c Release 1 (12.1) installed as part of an Oracle Grid Infrastructure for a cluster installation.

    • Any existing Oracle ASM instance on any node in the cluster must be shut down before you install Oracle RAC or create the Oracle RAC database.

  • For Standard Edition and Standard Edition 2 (SE2) Oracle RAC installations, Oracle ASM is the only supported shared storage option for database or recovery files. You must use Oracle ASM for the storage of Oracle RAC data files, online redo logs, archived redo logs, control files, server parameter file (SPFILE), and the fast recovery area.

6.1.4 About Oracle ACFS and Oracle ADVM

Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (Oracle ADVM) are the main components of Oracle Cloud File System (Oracle CloudFS).

Oracle ACFS extends Oracle ASM technology to support all of your application data in both single instance and cluster configurations. Oracle ADVM provides volume management services and a standard disk device driver interface to clients. Oracle Automatic Storage Management Cluster File System communicates with Oracle ASM through the Oracle Automatic Storage Management Dynamic Volume Manager interface.

6.2 Guidelines for Choosing a Shared Storage Option

You can choose any combination of the supported shared storage options for each file type if you satisfy all requirements listed for the chosen storage option.

6.2.1 Guidelines for Using Oracle ASM Disk Groups for Storage

During Oracle Grid Infrastructure installation, you can create one disk group. After the Oracle Grid Infrastructure installation, you can create additional disk groups using Oracle Automatic Storage Management Configuration Assistant (ASMCA), SQL*Plus, or Automatic Storage Management Command-Line Utility (ASMCMD). Note that with Oracle Database 11g Release 2 (11.2) and later releases, Oracle Database Configuration Assistant (DBCA) does not have the functionality to create disk groups for Oracle ASM.

If you install Oracle Database or Oracle RAC after you install Oracle Grid Infrastructure, then you can either use the same disk group for database files, OCR, and voting files, or you can use different disk groups. If you create multiple disk groups before installing Oracle RAC or before creating a database, then you can decide whether you want to:

  • Place the data files in the same disk group as the Oracle Clusterware files.

  • Use the same Oracle ASM disk group for data files and recovery files.

  • Use different disk groups for each file type.

If you create only one disk group for storage, then the OCR and voting files, database files, and recovery files are contained in the one disk group. If you create multiple disk groups for storage, then you can choose to place files in different disk groups.

6.2.2 Guidelines for Using Direct Network File System (NFS) with Oracle RAC

Network attached storage (NAS) systems use a network file system (NFS) to access data. You can store Oracle RAC data files and recovery files on a supported NAS server using Direct NFS Client.

NFS file systems must be mounted and available over NFS mounts before you start the Oracle RAC installation. See your vendor documentation for NFS configuration and mounting information.

The performance of Oracle Database software and databases that use NFS storage depends on the performance of the network connection between the database server and the NAS device. For this reason, Oracle recommends that you connect the database server (or cluster node) to the NAS device using a private, dedicated network connection, which should be Gigabit Ethernet or better.

6.2.3 Guidelines for Using Oracle ACFS with Oracle RAC

Oracle Automatic Storage Management Cluster File System (Oracle ACFS) provides a general purpose file system. Oracle ACFS extends Oracle ASM technology to support all of your application data in both single instance and cluster configurations. Oracle ADVM provides volume management services and a standard disk device driver interface to clients. Oracle Automatic Storage Management Cluster File System communicates with Oracle ASM through the Oracle Automatic Storage Management Dynamic Volume Manager interface.

You can place the Oracle home for Oracle Database 12c Release 1 (12.1) software on Oracle ACFS, but you cannot place Oracle Clusterware files on Oracle ACFS.

Note the following about Oracle ACFS:

  • You cannot put Oracle Clusterware executable files or shared files on Oracle ACFS.

  • You must use a domain user when installing Oracle Grid Infrastructure if you plan to use Oracle ACFS.

  • Starting with Oracle Grid Infrastructure 12c Release 1 (12.1) for a cluster, creating Oracle data files on an Oracle ACFS file system is supported.

  • You can put Oracle Database binaries and administrative files (for example, trace files) on Oracle ACFS.

  • When creating Oracle ACFS file systems on Windows, log on as a Windows domain user. Also, when creating files in an Oracle ACFS file system on Windows, you should be logged in as a Windows domain user to ensure that the files are accessible by all nodes.

    When using a file system across cluster nodes, the best practice is to mount the file system using a domain user, to ensure that the security identifier is the same across cluster nodes. Windows security identifiers, which are used in defining access rights to files and directories, use information which identifies the user. Local users are only known in the context of the local node. Oracle ACFS uses this information during the first file system mount to set the default access rights to the file system.

6.2.4 Guidelines for Placing Recovery Files on a File System

If you choose to place the recovery files on a cluster file system, then use the following guidelines when deciding where to place them:

  • To prevent disk failure from making the database files and the recovery files unavailable, place the recovery files on a cluster file system that is on a different physical disk from the database files.

    Note:

    Alternatively, use an Oracle ASM disk group with a normal or high redundancy level for data files, recovery files, or both file types, or use external redundancy.
  • The cluster file system that you choose should have at least 3 GB of free disk space.

    The disk space requirement is the default disk quota configured for the fast recovery area (specified by the DB_RECOVERY_FILE_DEST_SIZE initialization parameter).

    If you choose the Advanced database configuration option, then you can specify a different disk quota value. After you create the database, you can also use Oracle Enterprise Manager to specify a different value.

    See Also:

    Oracle Database Backup and Recovery User's Guide for more information about sizing the fast recovery area.

6.3 Storage Requirements for Oracle Clusterware and Oracle RAC

Each supported file system type has additional requirements that must be met to support Oracle Clusterware and Oracle RAC. Use the following sections to help you select your storage option.

6.3.1 Requirements for Using a Cluster File System for Oracle Database Files

If you choose to place your Oracle Database software or data files on a clustered file system, then one of the following should be true:

  • The disks used for the file system are on a highly available storage device (for example, a RAID device).

  • You use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

The user account with which you perform the installation must be able to create the files in the path that you specify.

Note:

On Windows platforms, the only supported method for storing Oracle Clusterware files is Oracle ASM.

6.3.2 Identifying Storage Requirements for Using Oracle ASM for Shared Storage

Before installing Oracle Grid Infrastructure, you must determine how many devices are available for use by Oracle ASM, the amount of free disk space available on each disk, and the redundancy level to use with Oracle ASM. When Oracle ASM provides redundancy, you must have sufficient capacity in each disk group to manage a re-creation of data that is lost after a failure of one or two failure groups.

Tip:

As you progress through the following steps, make a list of the raw device names you intend to use to create the Oracle ASM disk groups and have this information available during the Oracle Grid Infrastructure installation or when creating your Oracle RAC database.
  1. Determine whether you want to use Oracle ASM for Oracle Clusterware files (OCR and voting files), Oracle Database data files, recovery files, or all file types.

    Note:

    • You do not have to use the same storage mechanism for data files and recovery files. You can store one type of file in a cluster file system while storing the other file type within Oracle ASM. If you plan to use Oracle ASM for both data files and recovery files, then you should create separate Oracle ASM disk groups for the data files and the recovery files.

    • There are two types of Oracle Clusterware files: OCR files and voting files. Each type of file can be stored on either Oracle ASM or a cluster file system. All the OCR files or all the voting files must use the same type of storage. You cannot have some OCR files stored in Oracle ASM and other OCR files in a cluster file system. However, you can use one type of storage for the OCR files and a different type of storage for the voting files if all files of each type use the same type of storage.

  2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.

    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and amount of disk space that you require. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.

    A quorum failure group is a special type of failure group that stores the Oracle Clusterware voting files. The quorum failure group ensures that a quorum of the specified failure groups are available. When Oracle ASM mounts a disk group that contains Oracle Clusterware files, the quorum failure group determines if the disk group can be mounted in the event of the loss of one or more failure groups. Disks in the quorum failure group do not contain user data; therefore, a quorum failure group is not considered when determining redundancy requirements with respect to user data.

    The redundancy levels are as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      Because Oracle ASM does not mirror data in an external redundancy disk group, Oracle recommends that you use external redundancy with storage devices such as RAID, or other similar devices that provide their own data protection mechanisms.

    • Normal redundancy

      A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For Oracle Clusterware files, a normal redundancy disk group requires a minimum of three disk devices (failure groups use two of the three disks and the quorum failure group uses all three disks) and provides three voting files, one OCR, and one mirror copy of the OCR. When using a normal redundancy disk group, the cluster can survive the loss of one failure group.

      For most installations, Oracle recommends that you select normal redundancy disk groups.

    • High redundancy

      In a high redundancy disk group, Oracle ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.

      For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices (failure groups use three of the five disks and the quorum failure group uses all five disks) and provides five voting files, one OCR, and two mirror copies of the OCR. With high redundancy, the cluster can survive the loss of two failure groups.

      While high redundancy disk groups provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to use this redundancy level.

    Note:

    After a disk group is created, you cannot alter the redundancy level of the disk group.
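As a quick way to see how the redundancy level affects usable capacity, the effective-space rules above can be expressed as follows (an illustrative sketch; `effective_space_gb` is a hypothetical helper, not an Oracle utility):

```python
# Effective disk space per redundancy level, as described above:
# external = full raw capacity, normal = half, high = one-third.

def effective_space_gb(raw_total_gb: float, redundancy: str) -> float:
    """Return usable space for a disk group of the given redundancy level."""
    mirrors = {"external": 1, "normal": 2, "high": 3}[redundancy]
    return raw_total_gb / mirrors

# Six 100 GB disks (600 GB raw capacity):
print(effective_space_gb(600, "external"))  # 600.0
print(effective_space_gb(600, "normal"))    # 300.0
print(effective_space_gb(600, "high"))      # 200.0
```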
  3. Determine the total amount of disk space that you require for the Oracle Clusterware files using Oracle ASM for shared storage.

    Use Table 6-2, "Oracle Clusterware Storage Required by Redundancy Type" to determine the minimum number of disks and the minimum disk space requirements for installing Oracle Clusterware using Oracle ASM for shared storage:

    Table 6-2 Oracle Clusterware Storage Required by Redundancy Type

    External redundancy:
      Minimum number of disks: 1
      Oracle Cluster Registry (OCR) files: 400 MB
      Voting files: 300 MB
      Both file types: 700 MB
      Total: At least 5.9 GB for a cluster with 4 nodes or less
        (5.2 GB (see footnote 1) + 400 MB + 300 MB). Additional space is
        required for clusters with 5 or more nodes. For example, a six-node
        cluster allocation should be at least 6.9 GB:
        (5.2 GB + 2*(500 MB) + 400 MB + 300 MB).

    Normal redundancy:
      Minimum number of disks: 3
      Oracle Cluster Registry (OCR) files: At least 400 MB for each failure group, or 800 MB
      Voting files: At least 300 MB for each voting file, or 900 MB
      Both file types: 1.7 GB
      Total: At least 12.1 GB for a cluster with 4 nodes or less
        (10.4 GB + 2*400 MB + 3*300 MB). Additional space is required for
        clusters with 5 or more nodes. For example, a six-node cluster
        allocation should be at least 14.1 GB:
        (2*(5.2 GB + 2*(500 MB)) + (2*400 MB) + (3*300 MB)).

    High redundancy:
      Minimum number of disks: 5
      Oracle Cluster Registry (OCR) files: At least 400 MB for each failure group, or 1.2 GB
      Voting files: At least 300 MB for each voting file, or 1.5 GB
      Both file types: 2.7 GB
      Total: At least 18.3 GB for a cluster with 4 nodes or less
        (3*5.2 GB + 3*400 MB + 5*300 MB). Additional space is required for
        clusters with 5 or more nodes. For example, a six-node cluster
        allocation should be at least 21.3 GB:
        (3*(5.2 GB + 2*(500 MB)) + (3*400 MB) + (5*300 MB)).


    Footnote 1 The size of the Grid Infrastructure Management Repository for Oracle Clusterware 12.1.0.2 is 5.2 GB. For Oracle Clusterware 12.1.0.1, the size is 4.5 GB.

    Note:

    If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups.

    If you create a disk group for the OCR and voting files as part of the installation, then the installer requires that you create these files on a disk group with at least 2 GB of available space.

    To ensure high availability of Oracle Clusterware files on Oracle ASM, you must have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files.
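The totals in Table 6-2 can be reproduced from the per-file sizes and the Grid Infrastructure Management Repository size given in footnote 1. The following cross-check sketch assumes the 12.1.0.2 repository size; the function and constant names are illustrative, not Oracle terminology:

```python
# Cross-check of the Table 6-2 totals. All figures in GB.
GIMR = 5.2          # Grid Infrastructure Management Repository (12.1.0.2)
OCR = 0.4           # per OCR copy
VOTE = 0.3          # per voting file
EXTRA_NODE = 0.5    # additional repository space per node beyond 4

def clusterware_total(redundancy: str, nodes: int) -> float:
    """Minimum disk group space for Oracle Clusterware files, per Table 6-2."""
    gimr_copies, ocr_copies, votes = {
        "external": (1, 1, 1),
        "normal":   (2, 2, 3),
        "high":     (3, 3, 5),
    }[redundancy]
    gimr = GIMR + max(0, nodes - 4) * EXTRA_NODE
    return gimr_copies * gimr + ocr_copies * OCR + votes * VOTE

print(round(clusterware_total("external", 4), 1))  # 5.9
print(round(clusterware_total("external", 6), 1))  # 6.9
print(round(clusterware_total("normal", 4), 1))    # 12.1
print(round(clusterware_total("normal", 6), 1))    # 14.1
print(round(clusterware_total("high", 4), 1))      # 18.3
print(round(clusterware_total("high", 6), 1))      # 21.3
```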

  4. Determine an allocation unit size. Every Oracle ASM disk is divided into allocation units (AU). An allocation unit is the fundamental unit of allocation within a disk group. You can select an AU size of 1, 2, 4, 8, 16, 32, or 64 MB, depending on the disk group compatibility level. The default value is 1 MB.

  5. For Oracle Clusterware installations, you must also add additional disk space for the Oracle ASM metadata. You can use the following formula to calculate the disk space requirements (in MB) for OCR and voting files, and the Oracle ASM metadata:

    total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

    Where:

    • redundancy = Number of mirrors: external = 1, normal = 2, high = 3

    • ausize = Metadata AU size in megabytes (default is 1 MB)

    • nodes = Number of nodes in cluster

    • clients = Number of database instances for each node

    • disks = Number of disks in disk group

    For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, and a default AU size of 1 MB, you require an additional 1684 MB of space:

    [2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64* 4)+ 533)] = 1684 MB

    To ensure high availability of Oracle Clusterware files on Oracle ASM, for a normal redundancy disk group, you must have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 2.1 GB of capacity, with total capacity of at least 6.3 GB for three disks, to ensure that the effective disk space to create Oracle Clusterware files is 2 GB.
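The formula in this step can be evaluated directly. The following sketch reproduces the worked example (the function name is an assumption for illustration, not an Oracle-supplied script):

```python
# The Oracle ASM metadata sizing formula from step 5, in MB:
# total = [2 * ausize * disks]
#       + [redundancy * (ausize * (nodes * (clients + 1) + 30)
#                        + (64 * nodes) + 533)]

def asm_metadata_mb(redundancy: int, ausize: int, nodes: int,
                    clients: int, disks: int) -> int:
    """redundancy: 1 = external, 2 = normal, 3 = high; ausize in MB."""
    return (2 * ausize * disks) + \
           (redundancy * (ausize * (nodes * (clients + 1) + 30)
                          + (64 * nodes) + 533))

# The worked example: four nodes, three disks, normal redundancy, 1 MB AU,
# four database instances per node.
print(asm_metadata_mb(redundancy=2, ausize=1, nodes=4, clients=4, disks=3))
# 1684
```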

  6. Determine the total amount of disk space that you require for the Oracle database files and recovery files.

    Use the following table to determine the minimum number of disks and the minimum disk space requirements for installing the starter database:

    Table 6-3 Total Oracle Database Storage Space Required by Redundancy Type

    External redundancy:
      Minimum number of disks: 1
      Database files: 1.5 GB
      Recovery files: 3 GB
      Both file types: 4.5 GB

    Normal redundancy:
      Minimum number of disks: 2
      Database files: 3 GB
      Recovery files: 6 GB
      Both file types: 9 GB

    High redundancy:
      Minimum number of disks: 3
      Database files: 4.5 GB
      Recovery files: 9 GB
      Both file types: 13.5 GB


    Note:

    The file sizes listed in the previous table are estimates of minimum requirements for a new installation (or a database without any user data). The file sizes for your database will be larger.
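The Table 6-3 minimums follow the same mirroring pattern as the redundancy levels: each level multiplies the external-redundancy base sizes by the number of mirrors. As a cross-check (an illustrative sketch with assumed constant names):

```python
# Cross-check of Table 6-3: minimum database and recovery file space
# scales with the number of mirrors per redundancy level.
MIRRORS = {"external": 1, "normal": 2, "high": 3}
DB_BASE_GB = 1.5        # database files, external redundancy
RECOVERY_BASE_GB = 3.0  # recovery files, external redundancy

for level, m in MIRRORS.items():
    db, rec = DB_BASE_GB * m, RECOVERY_BASE_GB * m
    print(f"{level}: database {db} GB, recovery {rec} GB, both {db + rec} GB")
# external: database 1.5 GB, recovery 3.0 GB, both 4.5 GB
# normal: database 3.0 GB, recovery 6.0 GB, both 9.0 GB
# high: database 4.5 GB, recovery 9.0 GB, both 13.5 GB
```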
  7. Determine if you can use an existing disk group.

    If an Oracle ASM instance currently exists on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation. See Section 6.3.3.1, "Identifying Existing Oracle ASM Disk Groups to Use" for more information about using an existing disk group.

  8. If there is no existing Oracle ASM disk group to use, then create one or more Oracle ASM disk groups, as needed, before installing Oracle RAC. See Section 6.3.3.2, "Selecting Disks to use with Oracle ASM Disk Groups" for more information about selecting disks to use in a disk group.

  9. Optionally, identify failure groups for the Oracle ASM disk group devices.

    Note:

    You only have to complete this step if you plan to use an installation method that includes configuring Oracle ASM disk groups before installing Oracle RAC, or creating an Oracle RAC database.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. Failure groups define Oracle ASM disks that share a common potential failure mechanism. By default, each device comprises its own failure group. If you choose to define custom failure groups, then note the following:

    • You must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.

    • If the disk group contains data files and Oracle Clusterware files, including the voting files, then you must specify a minimum of three failure groups for normal redundancy disk groups and five failure groups for high redundancy disk groups.

    • Disk groups containing voting files must have at least three failure groups for normal redundancy or at least five failure groups for high redundancy. If the disk group does not contain the voting files, then the minimum number of required failure groups is two for normal redundancy and three for high redundancy. The minimum number of failure groups applies whether or not they are custom failure groups.

    If two disk devices in a normal redundancy disk group are attached to the same small computer system interface (SCSI) controller, then the disk group becomes unavailable if the controller fails. The SCSI controller in this example is a single point of failure. To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration enables the disk group to tolerate the failure of one SCSI controller.

    Note:

    You can define custom failure groups after installation of Oracle Grid Infrastructure using the GUI tool ASMCA, the command-line tool asmcmd, or SQL*Plus commands.
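The failure-group minimums listed in this step can be summarized as follows (a sketch; `min_failure_groups` is a hypothetical helper, not part of any Oracle tool):

```python
# Minimum failure-group counts for custom failure groups, per the rules
# above: the minimum rises when the disk group contains the voting files.

def min_failure_groups(redundancy: str, contains_voting_files: bool) -> int:
    """Return the minimum number of failure groups for a disk group."""
    base = {"normal": 2, "high": 3}[redundancy]
    with_votes = {"normal": 3, "high": 5}[redundancy]
    return with_votes if contains_voting_files else base

assert min_failure_groups("normal", contains_voting_files=False) == 2
assert min_failure_groups("normal", contains_voting_files=True) == 3
assert min_failure_groups("high", contains_voting_files=False) == 3
assert min_failure_groups("high", contains_voting_files=True) == 5
```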

6.3.3 Preparing Your System to Use Oracle ASM for Shared Storage

To use Oracle ASM as the shared storage solution for Oracle Clusterware or Oracle RAC files, you must perform certain tasks before you begin the software installation.

6.3.3.1 Identifying Existing Oracle ASM Disk Groups to Use

To use Oracle ASM as the storage option for either database or recovery files, you must use an existing Oracle ASM disk group, or use ASMCA to create the necessary disk groups before installing Oracle Database 12c Release 1 (12.1) and creating an Oracle RAC database.

To determine if an Oracle ASM disk group currently exists, or to determine whether there is sufficient disk space in an existing disk group, you can use the Oracle ASM command-line tool (asmcmd) or Oracle Enterprise Manager Cloud Control. Alternatively, you can use the following procedure:

  1. In the Services Control Panel, ensure that the OracleASMService+ASMn service, where n is the node number, has started.

  2. Open a Windows command prompt and temporarily set the ORACLE_SID environment variable to specify the appropriate value for the Oracle ASM instance to use.

    For example, if the Oracle ASM system identifier (SID) is named +ASM1, then enter a setting similar to the following:

    C:\> set ORACLE_SID=+ASM1
    
  3. If the ORACLE_HOME environment variable is not set to the Grid home, then temporarily set this variable to the location of the Grid home using a command similar to the following:

    C:\> set ORACLE_HOME=C:\app\12.1.0\grid
    
  4. Use ASMCMD to connect to the Oracle ASM instance and start the instance if necessary with a command similar to the following:

    C:\> %ORACLE_HOME%\bin\asmcmd
    ASMCMD> startup
    
  5. Enter one of the following commands to view the existing disk groups, their redundancy level, and the amount of free disk space in each disk group:

    ASMCMD> lsdg
    

    or:

    C:\> %ORACLE_HOME%\bin\asmcmd -p lsdg
    
  6. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  7. If necessary, install, or identify the additional disk devices required to meet the storage requirements listed in the previous section.

    Note:

    If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.

6.3.3.2 Selecting Disks to use with Oracle ASM Disk Groups

If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

  • All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.

  • Do not specify multiple partitions on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.

  • Nonshared logical partitions are not supported with Oracle RAC. To use logical partitions for your Oracle RAC database, you must use shared logical volumes created by a logical volume manager such as diskpart.msc.

  • Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend their use because it adds a layer of complexity that is unnecessary with Oracle ASM. In addition, Oracle RAC requires a cluster logical volume manager if you decide to use a logical volume with Oracle ASM and Oracle RAC.

6.3.3.3 Specifying the Oracle ASM Disk Discovery String

When an Oracle ASM instance is initialized, Oracle ASM discovers and examines the contents of all of the disks that are in the paths that you designated with values in the ASM_DISKSTRING initialization parameter. The value for the ASM_DISKSTRING initialization parameter is an operating system–dependent value that Oracle ASM uses to limit the set of paths that the discovery process uses to search for disks. The exact syntax of a discovery string depends on the platform, ASMLib libraries, and whether Oracle Exadata disks are used. The path names that an operating system accepts are always usable as discovery strings.

The default value of ASM_DISKSTRING might not find all disks in all situations. If your site is using a third-party vendor ASMLib, then the vendor might have discovery string conventions that you must use for ASM_DISKSTRING. In addition, if your installation uses multipathing software, then the software might place pseudo-devices in a path that is different from the operating system default.
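As a hedged illustration only, the following sketch shows how a Windows discovery string might be set from SQL*Plus while connected to the Oracle ASM instance as SYSASM. The pattern shown is an assumption based on the ORCLDISK stamping convention described in Section 6.6.2; your site's value may differ, especially with multipathing or vendor ASMLib software.

```sql
-- Illustrative sketch only: the pattern assumes disks stamped with the
-- default ORCLDISK prefix, as described in Section 6.6.2 of this chapter.
ALTER SYSTEM SET ASM_DISKSTRING = '\\.\ORCLDISK*' SCOPE=BOTH;
```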

6.3.4 Restrictions for Disk Partitions Used By Oracle ASM

Be aware of the following restrictions when configuring disk partitions for use with Oracle ASM:

  • You cannot use primary partitions for storing Oracle Clusterware files while running OUI to install Oracle Clusterware as described in Chapter 7, "Installing Oracle Grid Infrastructure for a Cluster". You must create logical drives inside extended partitions for the disks to be used by Oracle Clusterware files and Oracle ASM.

  • With x64 Windows, you can create up to 128 primary partitions for each disk.

  • You can create shared directories only on primary partitions and logical drives.

  • Oracle recommends that you limit the number of partitions you create on a single disk to prevent disk contention.

For these reasons, you might prefer to use extended partitions with logical drives for storing Oracle software files, rather than primary partitions.

6.3.5 Requirements for Using a Shared File System

To use a shared file system for Oracle RAC, the file system must comply with the following requirements:

  • To use NFS, it must be on a certified network attached storage (NAS) device. Access the My Oracle Support website as described in Section 3.4, "Checking Hardware and Software Certification on My Oracle Support" to find a list of certified NAS devices.

  • If placing the Oracle RAC data files on a shared file system, then one of the following should be true:

    • The disks used for the file system are on a highly available storage device, (for example, a RAID device).

    • The file systems consist of at least two independent file systems, with the data files on one file system, and the recovery files on a different file system.

  • The user account with which you perform the installation must be able to create the files in the path that you specify for the shared storage.

Note:

If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting file partitions, then you must extend these partitions to at least 300 MB. Oracle recommends that you do not use partitions, but instead place OCR and voting files in Oracle ASM disk groups marked as QUORUM disk groups.

All storage products must be supported by both your server and storage vendors.

Use the following table to determine the minimum size for shared file systems:

Table 6-4 Oracle RAC Shared File System Volume Size Requirements

File Types Stored Number of Volumes Volume Size

Oracle Database data files

1

At least 1.5 GB for each volume

Recovery files

Note: Recovery files must be on a different volume than database files

1

At least 2 GB for each volume


The total required volume size is cumulative. For example, if you use one volume for data files and one volume for recovery files, then you should have at least 3.5 GB of available storage over two volumes.

6.3.6 Requirements for Files Managed by Oracle

If you use Oracle ASM for your database files, then Oracle creates the database with Oracle Managed Files by default. When using the Oracle Managed Files feature, you need to specify only the database object name instead of file names when creating or deleting database files.

See Also:

"Using Oracle-Managed Files" in Oracle Database Administrator's Guide

6.4 After You Have Selected the Shared Storage Options

When you have determined your disk storage options, first perform the steps listed in Section 6.5, "Preliminary Shared Disk Preparation", and then configure the shared storage.

6.5 Preliminary Shared Disk Preparation

Complete the following steps to prepare shared disks for storage:

6.5.1 Disabling Write Caching

You must disable write caching on all disks that will be used to share data between the nodes in your cluster.

  1. Click Start, then select Control Panel, then Administrative Tools, then Computer Management, then Device Manager, and then Disk drives.

  2. Expand the Disk drives node and double-click the first drive listed.

  3. Under the Policies tab for the selected drive, uncheck the option that enables write caching.

  4. Double-click each of the other drives that will be used by Oracle Clusterware and Oracle RAC and disable write caching as described in the previous step.

Caution:

Any disk used to store files that will be shared between nodes, including database files, must have write caching disabled.

6.5.2 Enabling Automounting for Windows

Even though the automount feature is enabled by default, you should verify that automount is enabled.

You must enable automounting when using:

  • Raw partitions for Oracle ASM

  • Oracle Clusterware

  • Logical drives for Oracle ASM

Note:

Raw partitions are supported only when upgrading an existing installation using the configured partitions. On new installations, using raw partitions is not supported by ASMCA or OUI, but is supported by the software if you perform manual configuration.

To determine if automatic mounting of new volumes is enabled, use the following commands:

C:\> diskpart
DISKPART> automount
Automatic mounting of new volumes disabled.

To enable automounting:

  1. Enter the following commands at a command prompt:

    C:\> diskpart
    DISKPART> automount enable
    Automatic mounting of new volumes enabled.
    
  2. Type exit to end the diskpart session.

  3. Repeat steps 1 and 2 for each node in the cluster.

  4. You must restart each node after enabling disk automounting. After it is enabled and the node is restarted, automatic mounting remains active until it is disabled.

Note:

All nodes in the cluster must have automatic mounting enabled to correctly install Oracle RAC and Oracle Clusterware. Oracle recommends that you enable automatic mounting before creating any logical partitions for use by the database or Oracle ASM.
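The per-node check described above can also be scripted so that it is easy to repeat on every node. The following sketch is an assumption about how you might do this, using the documented /s option of diskpart to read commands from a file; the script file name is a placeholder of your choosing.

```
C:\> echo automount > check_automount.txt
C:\> diskpart /s check_automount.txt
```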

6.6 Configuring Shared Storage for Oracle ASM

The installer does not suggest a default location for the OCR or the voting file. If you choose to create these files on Oracle ASM, then you must first create and configure disk partitions to be used in the Oracle ASM disk group.

6.6.1 Create Disk Partitions for Use With Oracle ASM

The following steps outline the procedure for creating disk partitions for use with Oracle ASM:

  1. Use the Microsoft Computer Management utility or the command-line tool diskpart to create an extended partition. Use a basic disk; dynamic disks are not supported.

  2. Create at least one logical partition for the Oracle Clusterware files. You do not have to create separate partitions for the OCR and voting file; Oracle Clusterware creates individual files for the OCR and voting file in the specified location.

  3. If your file system does not use a redundant array of inexpensive disks (RAID), then create an additional extended partition and logical partition for each partition that will be used by Oracle Clusterware files, to provide redundancy.

To create the required partitions, use the Disk Management utilities available with Microsoft Windows. Use a basic disk with a Master Boot Record (MBR) partition style, and create the partitions as logical drives inside an extended partition.

  1. From an existing node in the cluster, run the Windows disk administration tool and create the partitions.

    See Section 6.7, "Configuring Disk Partitions on Shared Storage" for instructions on creating disk partitions using the DISKPART utility.

  2. On each node in the cluster, ensure that the partitions are visible and that none of the disk partitions created for shared storage have drive letters assigned. If any partitions have drive letters assigned, then remove them by performing these steps:

    • Right-click the partition in the Windows disk administration tool

    • Select "Change Drive Letter and Paths" from the menu

    • Click Remove in the "Change Drive Letter and Paths" window

6.6.2 Marking Disk Partitions for Oracle ASM Before Installation

The only partitions that OUI displays for Windows systems are logical drives that are on disks that do not contain a primary partition, and have been marked (or stamped) with asmtool. Configure the disks before installation either by using asmtoolg (graphical user interface (GUI) version) or using asmtool (command line version). You also have the option of using the asmtoolg utility during Oracle Grid Infrastructure for a cluster installation.

The asmtoolg and asmtool utilities only work on partitioned disks; you cannot use Oracle ASM on unpartitioned disks. You can also use these tools to reconfigure the disks after installation. These utilities are installed automatically as part of Oracle Grid Infrastructure.

Note:

If user account control (UAC) is enabled, then running asmtoolg or asmtool requires administrator-level permissions.

6.6.2.1 Overview of asmtoolg and asmtool

The asmtoolg and asmtool tools associate meaningful, persistent names with disks to facilitate using those disks with Oracle ASM. Oracle ASM uses disk strings so that it can operate on groups of disks at the same time. The names that asmtoolg or asmtool creates make this easier than using Windows drive letters.

All disk names created by asmtoolg or asmtool begin with the prefix ORCLDISK, followed by a user-defined prefix (the default is DATA) and a disk number for identification purposes. You can use these disks as raw devices in the Oracle ASM instance by specifying the name \\.\ORCLDISKprefixn, where prefix can be either DATA or a value you supply, and where n represents the disk number.
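For example, with the default prefix, the first two stamped disks would be addressed as:

```
\\.\ORCLDISKDATA0
\\.\ORCLDISKDATA1
```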

6.6.2.2 Using asmtoolg (Graphical User Interface) To Mark Disks

Use asmtoolg (GUI version) to create device names; use asmtoolg to add, change, delete, and examine the devices available for use in Oracle ASM.

  1. In the installation media for Oracle Grid Infrastructure, go to the asmtool folder and double-click asmtoolg.

    If Oracle Clusterware is installed, then go to the Grid_home\bin folder and double-click asmtoolg.exe.

    If User Account Control (UAC) is enabled, then you must create a desktop shortcut to a command window. Open the command window using the Run as Administrator option on the right-click context menu, and then launch asmtoolg.

  2. Select the Add or change label option, and then click Next.

    asmtoolg shows the devices available on the system. Unrecognized disks have a status of "Candidate device," stamped disks have a status of "Stamped ASM device," and disks that have had their stamp deleted have a status of "Unstamped ASM device." The tool also shows disks that are recognized by Windows as a file system (such as NTFS). These disks are not available for use as Oracle ASM disks and cannot be selected. In addition, Microsoft dynamic disks are not available for use as Oracle ASM disks.

    If necessary, follow the steps under Section 6.6.1, "Create Disk Partitions for Use With Oracle ASM" to create disk partitions for the Oracle ASM instance.

  3. On the Stamp Disks window, select the disks you want to use with Oracle ASM.

    For ease of use, Oracle ASM can generate unique stamps for all of the devices selected for a given prefix. The stamps are generated by concatenating a number with the prefix specified. For example, if the prefix is DATA, then the first Oracle ASM link name is ORCLDISKDATA0.

    You can also specify the stamps of individual devices.

  4. Optionally, select a disk to edit the individual stamp (Oracle ASM link name).

  5. Click Next.

  6. Click Finish.

6.6.2.3 Using asmtoolg To Remove Disk Stamps

You can use asmtoolg (GUI version) to delete disk stamps.

  1. In the installation media for Oracle Grid Infrastructure, go to the asmtool folder and double-click asmtoolg.

    If Oracle Clusterware is installed, then go to the Grid_home\bin folder and double-click asmtoolg.exe.

    If User Account Control (UAC) is enabled, then you must create a desktop shortcut to a command window. Open the command window using the Run as Administrator option on the right-click context menu, and then launch asmtoolg.

  2. Select the Delete labels option, then click Next.

    The delete option is only available if disks exist with stamps. The delete screen shows all stamped Oracle ASM disks.

  3. On the Delete Stamps screen, select the disks to unstamp.

  4. Click Next.

  5. Click Finish.

6.6.2.4 asmtool Command Line Reference

asmtool is a command-line interface for marking (or stamping) disks to be used with Oracle ASM.

-add

  Adds or changes stamps. You must specify the hard disk, partition, and new stamp name. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option.

  If necessary, follow the steps under Section 6.6.1, "Create Disk Partitions for Use With Oracle ASM" to create disk partitions for the Oracle ASM instance.

    asmtool -add [-force]
    \Device\Harddisk1\Partition1 ORCLDISKASM0
    \Device\Harddisk2\Partition1 ORCLDISKASM2
    ...

-addprefix

  Adds or changes stamps using a common prefix to generate stamps automatically. The stamps are generated by concatenating a number with the specified prefix. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option.

    asmtool -addprefix ORCLDISKASM [-force]
    \Device\Harddisk1\Partition1
    \Device\Harddisk2\Partition1
    ...

-create

  Creates an Oracle ASM disk device from a file instead of a partition.

  Note: Use of this command is not supported in production environments.

    asmtool -create \\server\share\file 1000
    asmtool -create D:\asm\asmfile02.asm 240

-list

  Lists available disks, showing the stamp, Windows device name, and disk size in MB.

    asmtool -list

-delete

  Removes existing stamps from disks.

    asmtool -delete ORCLDISKASM0 ORCLDISKASM1...

If User Account Control (UAC) is enabled, then you must create a desktop shortcut to a command window. Open the command window using the Run as Administrator option on the right-click context menu, and then launch asmtool.

Note:

If you use -add, -addprefix, or -delete, asmtool notifies the Oracle ASM instance on the local node and on other nodes in the cluster, if available, to rescan the available disks.

6.7 Configuring Disk Partitions on Shared Storage

To create disk partitions, use the disk administration tools provided by the operating system or third party vendors. You can create the disk partitions using either the Disk Management Interface or the DiskPart utility, both of which are provided by the operating system.

To use shared disks not managed by Oracle ASM for the Oracle home and data files, the following partitions, at a minimum, must exist before you run OUI to install Oracle Clusterware:

  • 5.5 GB or larger partition for the Oracle home, if you want a shared Oracle home

  • 3 GB or larger partitions for the Oracle Database data files and recovery files

6.7.1 Creating Disk Partitions Using the Disk Management Interface

Use the graphical user interface Disk Management snap-in to manage disks.

  1. To access the Disk Management snap-in, do one of the following:

    • Type diskmgmt.msc at the command prompt.

    • From the Start menu, select All Programs, then Administrative Tools, then Computer Management. Then select the Disk Management node in the Storage tree.

  2. Create primary partitions and logical drives in extended partitions by selecting the New Simple Volume option. To specify a raw partition, you must select Do not format this partition. Do not use spanned volumes or striped volumes; these options convert the volume to a dynamic disk, and Oracle Automatic Storage Management does not support dynamic disks.

    On other Windows versions, create primary partitions by selecting the New Partition option, and create logical drives by selecting the New Logical Drive option.

  3. To create a raw device, after the partition is created, you must remove the drive letter that was assigned to the partition.

6.7.2 Creating Disk Partitions using the DiskPart Utility

To create the required partitions, perform the following steps:

  1. From an existing node in the cluster, run the DiskPart utility as follows:

    C:\> diskpart
    DISKPART>
    
  2. List the available disks, and then select the disk on which you want to create a partition by specifying its disk number (n).

    DISKPART> list disk
    DISKPART> select disk n
    
  3. Create an extended partition:

    DISKPART> create part ext
    
  4. Create a logical drive of the desired size after the extended partition is created using the following syntax:

    DISKPART> create part log [size=n] [offset=n] [noerr]
    
  5. Repeat steps 2 through 4 for the second and any additional partitions. An optimal configuration is one partition for the Oracle home and one partition for Oracle Database files.

  6. List the available volumes, and remove any drive letters from the logical drives you plan to use.

    DISKPART> list volume
    DISKPART> select volume n
    DISKPART> remove
    
  7. Check all nodes in the cluster to ensure that the partitions are visible on all the nodes and to ensure that none of the Oracle partitions have drive letters assigned. If any partitions have drive letters assigned, then remove them by performing these steps:

    • Right-click the partition in the Windows Disk Management utility

    • Select "Change Drive Letter and Paths" from the menu

    • Click Remove in the "Change Drive Letter and Paths" window

6.8 Configuring Direct NFS Client for Oracle RAC Data Files

Direct NFS Client is an interface to NFS storage systems that is provided by Oracle.

6.8.1 About Direct NFS Client Storage

With Oracle Database, instead of using the operating system NFS client or a third-party NFS client, you can configure Oracle Database to use Direct NFS Client to access NFS servers directly. Direct NFS Client supports the NFSv3, NFSv4, and NFSv4.1 protocols (excluding the Parallel NFS extension) to access the NFS server. Direct NFS Client tunes itself to make optimal use of available resources and enables the storage of data files on supported NFS servers.

Note:

Use NFS servers supported for Oracle RAC. Check My Oracle Support, as described in Section 3.4, "Checking Hardware and Software Certification on My Oracle Support" for support information.

To enable Oracle Database to use Direct NFS Client, the NFS file systems must be mounted and available over regular NFS mounts before you start installation. Direct NFS Client manages settings after installation. If Oracle Database cannot open an NFS server using Direct NFS Client, then an informational message is logged into the Oracle alert log. A trace file is also created, indicating that Direct NFS Client could not connect to an NFS server.

Note:

Direct NFS does not work if the backend NFS server does not support a write size (wtmax) of 32768 or larger.

The Oracle files resident on the NFS server that are accessed by Direct NFS Client can also be accessed through a third party NFS client. Management of Oracle data files created with Direct NFS Client should be done according to the guidelines specified in the "Managing Datafiles and Tempfiles" chapter of Oracle Database Administrator's Guide.

Volumes mounted through the Common Internet File System (CIFS) protocol cannot be used for storing Oracle database files without configuring Direct NFS Client. The atomic write requirements needed for database writes are not guaranteed by the CIFS protocol; consequently, CIFS can be used only for operating system–level access, for example, for commands such as copy.

Some NFS file servers require NFS clients to connect using reserved ports. If your filer is running with reserved port checking, then you must disable it for Direct NFS to operate. To disable reserved port checking, consult your NFS file server documentation.

For NFS servers that restrict port range, you can use the insecure option to enable clients other than an Administrator user to connect to the NFS server. Alternatively, you can disable Direct NFS Client as described in Section 6.8.9, "Disabling Oracle Disk Management Control of NFS for Direct NFS Client."

6.8.2 About the Oranfstab File for Direct NFS Client

If you use Direct NFS Client, then you must create a configuration file, oranfstab, to specify the options, attributes, and parameters that enable Oracle Database to use Direct NFS Client. Direct NFS Client looks for the mount point entries in oranfstab. It uses the first matched entry as the mount point. You must create the oranfstab file in the Oracle_home\dbs directory.

When the oranfstab file is placed in Oracle_home\dbs, the entries in the file are specific to a single database. For Oracle RAC installations in a shared Oracle home, the oranfstab file is globally available to all database instances: all instances that use the shared Oracle home use the same Oracle_home\dbs\oranfstab file. For a nonshared Oracle home, because all of the Oracle RAC instances must use the same oranfstab settings, you must replicate the oranfstab file on all of the nodes and keep the copies synchronized.
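As a hedged illustration of replicating the file in a nonshared Oracle home, you might copy it to the other nodes from an elevated command prompt. The node name and remote Oracle home path below are placeholders, not values from this document; substitute your own.

```
rem Hypothetical example: node2 and the remote Oracle home path are placeholders.
C:\> copy /Y %ORACLE_HOME%\dbs\oranfstab \\node2\C$\app\oracle\product\12.1.0\dbhome_1\dbs\
```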

Note:

If you remove an NFS path from oranfstab that Oracle Database is using, then you must restart the database for the change to be effective. In addition, the mount point that you use for the file system must be identical on each node.

See Also:

Section 6.8.6, "Enabling Direct NFS Client" for more information about creating the oranfstab file

6.8.3 Configurable Attributes for the oranfstab File

Attribute Description
server The NFS server name.
path Up to four network paths to the NFS server, specified either by internet protocol (IP) address or by name, as displayed using the ifconfig command on the NFS server.
local Up to four network interfaces on the database host, specified by IP address or by name, as displayed using the ipconfig command on the database host.
export The exported path from the NFS server. Use a UNIX-style path.
mount The corresponding local mount point for the exported volume. Use a Windows-style path.
mnt_timeout (Optional) Specifies the time (in seconds) for which Direct NFS Client waits for a successful mount before timing out. The default timeout is 10 minutes (600).
uid (Optional) The UNIX user ID to be used by Direct NFS Client to access all NFS servers listed in oranfstab. The default value is uid:65534, which corresponds to user:nobody on the NFS server.
gid (Optional) The UNIX group ID to be used by Direct NFS Client to access all NFS servers listed in oranfstab. The default value is gid:65534, which corresponds to group:nogroup on the NFS server.
nfs_version (Optional) Specifies the NFS protocol version that Direct NFS Client uses. Possible values are NFSv3, NFSv4, and NFSv4.1. The default version is NFSv3. To specify NFSv4 or NFSv4.1, you must set the nfs_version parameter accordingly in the oranfstab file.
management Enables Direct NFS Client to use the management interface for SNMP queries. You can use this parameter if SNMP is running on separate management interfaces on the NFS server. The default value is server.
community Specifies the community string for use in SNMP queries. The default value is public.

See Also:

"Limiting Asynchronous I/O in NFS Server Environments" in Oracle Database Performance Tuning Guide

6.8.4 Mounting NFS Storage Devices with Direct NFS Client

Direct NFS Client determines mount point settings for NFS storage devices based on the configuration information in oranfstab. Direct NFS Client uses the first matching entry as the mount point. If Oracle Database cannot open an NFS server using Direct NFS Client, then an error message is written into the Oracle alert and trace files indicating that Direct NFS Client could not be established.

Note:

You can have only one active Direct NFS Client implementation for each instance. Enabling Direct NFS Client on an instance prevents the use of another Direct NFS Client implementation.

Direct NFS Client requires an NFS server supporting NFS read/write buffers of at least 16384 bytes.

Direct NFS Client issues writes at wtmax granularity to the NFS server. Direct NFS Client does not use an NFS server with a wtmax less than 16384. Oracle recommends that you use the value 32768.

See Also:

Section 6.1.1, "Supported Storage Options for Oracle Grid Infrastructure and Oracle RAC" for a list of the file types that are supported with Direct NFS Client

6.8.5 Specifying Network Paths for an NFS Server

Direct NFS Client can use up to four network paths defined in the oranfstab file for an NFS server. Direct NFS Client performs load balancing across all specified paths. If a specified path fails, then Direct NFS Client re-issues all outstanding requests over any remaining paths.

Note:

You can have only one active Direct NFS Client implementation for each instance. Using Direct NFS Client on an instance prevents the use of another Direct NFS Client implementation.

Example 6-1 and Example 6-2 provide examples of configuring network paths for Direct NFS Client attributes in an oranfstab file.

6.8.6 Enabling Direct NFS Client

To enable Direct NFS Client, you must add an oranfstab file to Oracle_home\dbs. When oranfstab is placed in this directory, the entries in this file are specific to one particular database. Direct NFS Client searches for the mount point entries as they appear in oranfstab. Direct NFS Client uses the first matched entry as the mount point.

  1. Create an oranfstab file and specify the attributes listed in Section 6.8.3, "Configurable Attributes for the oranfstab File" for each NFS server that Direct NFS Client accesses:

    See Also:

    "Limiting Asynchronous I/O in NFS Server Environments" in Oracle Database Performance Tuning Guide

    The mount point specified in the oranfstab file represents the local path where the database files would normally reside if Direct NFS Client were not used. For example, if the data files would be located in the C:\app\oracle\oradata\orcl directory if the database did not use Direct NFS Client, then you specify C:\app\oracle\oradata\orcl for the NFS virtual mount point in the corresponding oranfstab file.

    Example 6-1 and Example 6-2 provide examples of how Direct NFS Client attributes can be used in an oranfstab file.

    Note:

    • Direct NFS Client ignores a uid or gid value of 0.

    • The exported path from the NFS server must be accessible for read/write/execute by the user with the uid, gid specified in oranfstab. If neither uid nor gid is listed, then the exported path must be accessible by the user with uid:65534 and gid:65534.

  2. Replace the standard ODM library, oraodm12.dll, with the ODM NFS library.

    Oracle Database uses the ODM library, oranfsodm12.dll, to enable Direct NFS Client. To replace the ODM library, complete the following steps:

    1. Change directory to Oracle_home\bin.

    2. Shut down the Oracle Database instance on a node using the Server Control Utility (SRVCTL).

    3. Enter the following commands:

      copy oraodm12.dll oraodm12.dll.orig
      copy /Y oranfsodm12.dll oraodm12.dll 
      
    4. Restart the Oracle Database instance using SRVCTL.

    5. Repeat steps 1 through 4 for each node in the cluster.

Example 6-1 oranfstab File Using Local and Path NFS Server Entries

The following example of an oranfstab file shows an NFS server entry, where the NFS server, MyDataServer1, uses two network paths specified with IP addresses.

server: MyDataServer1
local: 192.0.2.0
path: 192.0.2.1
local: 192.0.100.0
path: 192.0.100.1
nfs_version: nfsv3
export: /vol/oradata1 mount: C:\APP\ORACLE\ORADATA\ORCL
 

Example 6-2 oranfstab File Using Network Connection Names

The following example of an oranfstab file shows an NFS server entry, where the NFS server, MyDataServer2, uses four network paths specified by the network interface to use, or the network connection name. Multiple export paths are also used in this example.

server: MyDataServer2
local: LocalInterface1
path: NfsPath1
local: LocalInterface2
path: NfsPath2
local: LocalInterface3
path: NfsPath3
local: LocalInterface4
path: NfsPath4
nfs_version: nfsv4
export: /vol/oradata2 mount: C:\APP\ORACLE\ORADATA\ORCL2
export: /vol/oradata3 mount: C:\APP\ORACLE\ORADATA\ORCL3
management: MgmtPath1
community: private

6.8.7 Performing Basic File Operations Using the ORADNFS Utility

ORADNFS is a utility that enables database administrators to perform basic file operations over Direct NFS Client on Microsoft Windows platforms.

ORADNFS is a multi-call binary: a single executable that acts as many different utilities.

You must be a member of the local ORA_DBA group to use ORADNFS. A valid copy of the oranfstab configuration file must be present in Oracle_home\dbs for ORADNFS to operate.

  • To execute a command using ORADNFS, issue the command as an argument on the command line.

    The following command prints a list of commands available with ORADNFS:

    C:\> oradnfs help
    

    To display the list of files in the NFS directory mounted as C:\ORACLE\ORADATA, use the following command:

    C:\> oradnfs ls C:\ORACLE\ORADATA\ORCL
    

6.8.8 Monitoring Direct NFS Client Usage

Use the following global dynamic performance views for managing Direct NFS Client usage with your Oracle RAC database:

  • GV$DNFS_SERVERS: Lists the servers that are accessed using Direct NFS Client.

  • GV$DNFS_FILES: Lists the files that are currently open using Direct NFS Client.

  • GV$DNFS_CHANNELS: Lists the open network paths, or channels, to servers for which Direct NFS Client is providing files.

  • GV$DNFS_STATS: Lists performance statistics for Direct NFS Client.
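As a hedged illustration, a monitoring query against the first of these views might look like the following. The column names shown are assumptions for illustration; check the view's actual shape in your release before relying on them.

```sql
-- Illustrative sketch: list NFS servers accessed through Direct NFS Client
-- on each instance. Column names are assumptions, not verified here.
SELECT inst_id, svrname, dirname
  FROM gv$dnfs_servers
 ORDER BY inst_id;
```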

6.8.9 Disabling Oracle Disk Management Control of NFS for Direct NFS Client

If you no longer want to use Direct NFS Client, you can disable it.

  1. Log in as the Oracle Grid Infrastructure software owner.

  2. Restore the original oraodm12.dll file by reversing the process you completed in Section 6.8.6, "Enabling Direct NFS Client."

  3. Remove the oranfstab file.

6.9 Upgrading Existing Oracle ASM Instances

If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA) to upgrade the existing Oracle ASM instance to Oracle ASM 12c Release 1 (12.1).

The ASMCA utility is located in the path Grid_home\bin. You can also use ASMCA to configure failure groups, Oracle ASM volumes and Oracle Automatic Storage Management Cluster File System (Oracle ACFS).

Note:

You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.

During installation, if you are upgrading from an Oracle ASM release earlier than 11.2, you choose to use Oracle ASM, and ASMCA detects a prior Oracle ASM release installed in another Oracle ASM home, then after installing the Oracle ASM 12c Release 1 (12.1) binaries, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.

If you are upgrading from Oracle ASM 11g Release 2 (11.2.0.1) or later, then Oracle ASM is always upgraded with Oracle Grid Infrastructure as part of the rolling upgrade, and ASMCA is started during the upgrade. ASMCA cannot perform a separate upgrade of Oracle ASM from a prior release to the current release.

On an existing Oracle Clusterware or Oracle RAC installation, if the prior release of the Oracle ASM instances on all nodes is Oracle ASM 11g Release 1 or later, then you are provided with the option to perform a rolling upgrade of the Oracle ASM instances. If the prior release of the Oracle ASM instances for an Oracle RAC installation is earlier than Oracle ASM 11g Release 1, then rolling upgrades cannot be performed; Oracle ASM on all nodes is upgraded to Oracle ASM 12c Release 1 (12.1) at the same time.

6.10 Configuring Oracle Automatic Storage Management Cluster File System

If you want to install Oracle RAC on Oracle ACFS, you must first create the Oracle home directory in Oracle ACFS.

You can also create a General Purpose File System configuration of ACFS using ASMCA. Oracle ACFS is installed as part of an Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle Automatic Storage Management) for 12c Release 1 (12.1).

The compatibility parameters COMPATIBLE.ASM and COMPATIBLE.ADVM must be set to 11.2 or higher for the disk group to contain an Oracle ADVM volume.
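For example, assuming a disk group named DATA, you can check and raise these attributes from SQL*Plus while connected to the Oracle ASM instance (a sketch; the disk group name is illustrative):

```sql
-- Check the current compatibility settings for each disk group:
SELECT name, compatibility, database_compatibility
FROM   v$asm_diskgroup;

-- Raise the attributes so the DATA disk group can contain an
-- Oracle ADVM volume. Compatibility can be advanced, never lowered.
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'  = '12.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.advm' = '12.1';
```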

To create the Oracle home for your Oracle RAC database in Oracle ACFS, perform the following steps:

  1. Install Oracle Grid Infrastructure for a cluster (Oracle Clusterware and Oracle ASM).

  2. Go to the bin directory in the Grid home, for example:

    C:\> cd app\12.1.0\grid\bin
    
  3. Ensure that the Oracle Grid Infrastructure installation owner has read and write permissions on the storage mountpoint you want to use. For example, to display the ownership of the files in the mountpoint E:\data\acfsmounts, use the following command:

    C:\..bin> dir /Q E:\data\acfsmounts
    
  4. Start ASMCA as the Oracle Installation user for Oracle Grid Infrastructure, for example:

    C:\..\bin> asmca
    

    The Configure ASM: ASM Disk Groups page is displayed.

  5. The Configure ASM: ASM Disk Groups page shows you the Oracle ASM disk group you created during installation. Click the ASM Cluster File Systems tab.

  6. On the ASM Cluster File Systems page, right-click the disk group in which you want to create the Oracle ADVM volume, then select Create ACFS for Database Home.

  7. In the Create ACFS Hosted Database Home window, enter the following information:

    • Database Home ADVM Volume Device Name: Enter the name of the database home. The name must be unique in your enterprise, for example: racdb_01

    • Database Home Mountpoint: Enter the directory path or logical drive letter for the mountpoint. For example: M:\acfsdisks\racdb_01

      Make a note of this mountpoint for future reference.

    • Database Home Size (GB): Enter the size, in gigabytes, that you want for the database home.

    • Database Home Owner Name: Enter the name of the Oracle Installation user you plan to use to install the database. For example: oracle1

  8. Click OK when you have entered the required information.

  9. If prompted, run any scripts as directed by ASMCA as the Local Administrator user.

    In an Oracle Clusterware environment, the script registers the Oracle ACFS file system as a resource managed by Oracle Clusterware. Registering it as a resource enables Oracle Clusterware to mount the file system automatically, in the proper order, when it is used for an Oracle RAC database home.

  10. During Oracle RAC 12c Release 1 (12.1) installation, ensure that you, or the database administrator who installs Oracle RAC, select as the Oracle home the mountpoint you provided in the Database Home Mountpoint field (in the preceding example, M:\acfsdisks\racdb_01).
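As a command-line alternative to the volume-creation portion of the preceding steps, you can create the underlying Oracle ADVM volume from SQL*Plus while connected to the Oracle ASM instance (a sketch; the DATA disk group, RACDB_01 volume name, and size are illustrative, and you must still format and mount the volume as an Oracle ACFS file system, for example with ASMCA):

```sql
-- Create a 40 GB ADVM volume named RACDB_01 in the DATA disk group:
ALTER DISKGROUP data ADD VOLUME racdb_01 SIZE 40G;

-- Confirm the volume and note its device name for the ACFS format step:
SELECT volume_name, volume_device, size_mb, state
FROM   v$asm_volume;
```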

Note:

You cannot place the Oracle home directory for Oracle Database 11g Release 1 or earlier releases on Oracle ACFS.

See Also:

Oracle Automatic Storage Management Administrator's Guide for more information about configuring and managing your storage with Oracle ACFS