Oracle® Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Microsoft Windows x64 (64-Bit) Part Number E18029-04
This chapter describes the storage configuration tasks that you must complete before you start the installer to install Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM), and that you must complete before adding an Oracle Real Application Clusters (Oracle RAC) installation to the cluster.
This chapter contains the following topics:
Supported Storage Options for Oracle Grid Infrastructure and Oracle RAC
Configuring Storage for Oracle Database Files on OCFS for Windows
Configuring Oracle Automatic Storage Management Cluster File System
This section describes the supported options for storing the Oracle Grid Infrastructure for a cluster software and its shared files. It contains the following sections:
See Also:
The Certification page in My Oracle Support for a list of supported vendors for Network Attached Storage options. See Section 2.4, "Checking Hardware and Software Certification on My Oracle Support" for instructions on how to access the certification information.

Oracle Clusterware voting disks are used to monitor cluster node status, and the Oracle Cluster Registry (OCR) is a file that contains the configuration information and status of the cluster. The installer automatically initializes the OCR during the Oracle Clusterware installation. Oracle Database Configuration Assistant (DBCA) uses the OCR for storing the configurations for the cluster databases that it creates.
You can place voting disks and OCR files either in an Oracle ASM disk group, or on a cluster file system. Storage must be shared; any node that does not have access to an absolute majority of voting disks (more than half) will be restarted.
Note:
To store the Oracle Clusterware files in an Oracle ASM disk group, the disk group compatibility must be at least 11.2, which is the default for new installations of Oracle Grid Infrastructure. If you are upgrading an Oracle ASM installation, then see Oracle Automatic Storage Management Administrator's Guide for more information about disk group compatibility.

For a storage option to meet high availability requirements, the files stored on the disk must be protected by data redundancy, so that if one or more disks fail, the data stored on the failed disks can be recovered. This redundancy can be provided externally by using Redundant Array of Independent Disks (RAID) devices, or by using logical volumes on multiple physical devices that implement the stripe-and-mirror-everything methodology, also known as SAME. If you do not have RAID devices or logical volumes, then you can create additional copies, or mirrors, of the files on different file systems. If you choose to mirror the files, then you must provide disk space for additional OCR files and at least two additional voting disk files.
Each OCR location should be placed on a different disk. For voting disk file placement, ensure that each file is configured so that it does not share any hardware device, disk, or other single point of failure with the other voting disks. Any node that cannot access an absolute majority of the configured voting disks (more than half) is restarted.
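The majority rule described above can be sketched as a short calculation. This is an illustrative sketch only; the actual eviction logic is internal to Oracle Clusterware, and the function name here is invented for illustration.

```python
def node_survives(total_voting_disks: int, accessible: int) -> bool:
    """A node must access an absolute majority (more than half) of the
    configured voting disks, or it is restarted."""
    return accessible > total_voting_disks / 2

# With three voting disks, a node tolerates the loss of one, but not two.
assert node_survives(3, 2) is True
assert node_survives(3, 1) is False
# With an even count, exactly half is NOT an absolute majority.
assert node_survives(4, 2) is False
```

This is why voting disk counts are always odd (three, five): an even number adds cost without increasing the number of tolerated failures.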
Use the following guidelines when choosing storage options for the Oracle Clusterware files:
You can choose any combination of the supported storage options for each file type if you satisfy all requirements listed for the chosen storage options.
You can use Oracle ASM 11g release 2 (11.2) for shared storage. You cannot use earlier Oracle ASM releases to do this.
If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk locations to provide redundancy.
The Oracle Grid Infrastructure home (Grid home) cannot be stored on a shared file system; it must be installed on a local disk.
If you choose to store Oracle Clusterware files on Oracle ASM and use redundancy for the disk group, then Oracle ASM automatically maintains the ideal number of voting files based on the redundancy of the disk group. The voting files are created within a single disk group, and you cannot manually add extra voting files to this disk group.
For all Oracle RAC installations, you must choose the shared storage options to use for Oracle Database files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file. To enable automated backups during the installation, you must also choose a shared storage option to use for recovery files (the fast recovery area).
Use the following guidelines when choosing the storage options to use for the Oracle Database files:
The shared storage option that you choose for recovery files can be the same as or different from the shared storage option that you choose for the data files. However, you cannot use raw devices to store recovery files.
Raw devices are supported only when upgrading an existing installation and using the previously configured raw partitions. On new installations, using raw disks or partitions is not supported by Oracle Automatic Storage Management Configuration Assistant (ASMCA) or Oracle Universal Installer (OUI), but is supported for Oracle RAC if you perform manual configuration of the database.
See Also:
Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database

You can choose any combination of the supported shared storage options for each file type if you satisfy all requirements listed for the chosen storage options.
Oracle recommends that you choose Oracle ASM as the shared storage option for the database data files and recovery files.
For Standard Edition Oracle RAC installations, Oracle ASM is the only supported shared storage option for database or recovery files. You must use Oracle ASM for the storage of Oracle RAC data files, online redo logs, archived redo logs, control files, server parameter file (SPFILE), and the fast recovery area.
If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new Oracle ASM instance, then your system must meet the following conditions:
All the nodes in the cluster must have Oracle Clusterware and Oracle ASM 11g release 2 (11.2) installed as part of an Oracle Grid Infrastructure for a cluster installation.
Any existing Oracle ASM instance on any node in the cluster is shut down before installing Oracle RAC or creating the Oracle RAC database.
During Oracle Grid Infrastructure installation, you can create one disk group. After the Oracle Grid Infrastructure installation, you can create additional disk groups using ASMCA, SQL*Plus, or ASMCMD. Note that with Oracle Database 11g release 2 (11.2) and later releases, Oracle Database Configuration Assistant (DBCA) does not have the functionality to create disk groups for Oracle ASM.
If you install Oracle Database or Oracle RAC after you install Oracle Grid Infrastructure, then you can either use the same disk group for database files, OCR, and voting disk files, or you can use different disk groups. If you create multiple disk groups before installing Oracle RAC or before creating a database, then you can decide whether you want to:
Place the data files in the same disk group as the Oracle Clusterware files
Use the same Oracle ASM disk group for data files and recovery files
Use different disk groups for each file type
If you create only one disk group for storage, then the OCR and voting disk files, database files, and recovery files are contained in the one disk group. If you create multiple disk groups for storage, then you can choose to place files in different disk groups.
Note:
The Oracle ASM instance that manages the existing disk group should be running in the Grid home.

See Also:
Oracle Database Installation Guide for Microsoft Windows for information about configuring a database to use Oracle ASM storage
Oracle Automatic Storage Management Administrator's Guide for information about creating disk groups
Network attached storage (NAS) systems use a network file system (NFS) to access data. You can store Oracle RAC data files and recovery files on a supported NAS server using the Oracle Direct NFS client.
The NFS file system must be mounted and available before you start the Oracle RAC installation. See your vendor documentation for NFS configuration and mounting information.
Note that the performance of Oracle Database software and of the databases that use NFS storage depends on the performance of the network connection between the database server and the NAS device. For this reason, Oracle recommends that you connect the database server (or cluster node) to the NAS device over a private, dedicated network connection, which should be Gigabit Ethernet or better.
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) provides a general purpose file system. You can place the Oracle home for an Oracle Database 11g release 2 (11.2) database on Oracle ACFS, but you cannot place Oracle data files or Oracle Clusterware files on Oracle ACFS. Note the following about Oracle ACFS:
You cannot put Oracle Clusterware executable files or shared files on Oracle ACFS.
You cannot put Oracle Database data files or recovery files on Oracle ACFS.
You can put Oracle Database executable files and administrative files (for example, trace files) on Oracle ACFS.
Oracle ACFS provides a general purpose file system for file types other than Oracle data files.
Note:
For Oracle ASM 11g release 2 (11.2.0.1), Oracle ACFS and Oracle ASM Dynamic Volume Manager (Oracle ADVM) are supported only in the following environments:

Windows Server 2003, x64
Windows Server 2003 R2, x64
Starting with Oracle ASM 11g release 2 (11.2.0.2), Oracle ACFS and Oracle ADVM are also supported on Windows Server 2008, x64 and Windows Server 2008 R2, x64.
If you decide to place the Oracle data files on Oracle Cluster File System for Windows (OCFS for Windows), then use the following guidelines when deciding where to place them:
You can choose either a single cluster file system or multiple cluster file systems to store the data files:
To use a single cluster file system, choose a cluster file system on a physical device that is dedicated to the database.
For best performance and reliability, choose a RAID device or a logical volume on multiple physical devices and implement the stripe-and-mirror-everything methodology, also known as SAME.
To use multiple cluster file systems, choose cluster file systems on separate physical devices or partitions that are dedicated to the database.
This method enables you to distribute physical I/O and create separate control files on different devices for increased reliability. It also enables you to fully implement Oracle Optimal Flexible Architecture (OFA) guidelines. To implement this method, you must choose the Advanced database creation option in OUI.
If you intend to create a preconfigured database during the installation, then the cluster file systems that you choose must have at least 4 gigabytes (GB) of free disk space.
For production databases, you must estimate the disk space requirement based on how you use the database.
For optimum performance, the cluster file systems that you choose should be on physical devices that are used only by the database.
Note:
You must not create a New Technology File System (NTFS) partition on a disk that you are using for OCFS for Windows.

OCFS for Windows does not support network access through NFS or Windows Network Shares.
You must choose a location for the Oracle Database recovery files before installation only if you intend to enable automated backups during installation.
If you choose to place the recovery files on a cluster file system, then use the following guidelines when deciding where to place them:
To prevent disk failure from making the database files and the recovery files unavailable, place the recovery files on a cluster file system that is on a different physical disk from the database files.
Note:
Alternatively, use an Oracle ASM disk group with a normal or high redundancy level for either or both file types, or use external redundancy.

The cluster file system that you choose should have at least 3 GB of free disk space. The disk space requirement is the default disk quota configured for the fast recovery area (specified by the DB_RECOVERY_FILE_DEST_SIZE initialization parameter).
If you choose the Advanced database configuration option, then you can specify a different disk quota value. After you create the database, you can also use Oracle Enterprise Manager to specify a different value.
See Also:
Oracle Database Backup and Recovery Basics for more information about sizing the fast recovery area.

Both Oracle Clusterware and the Oracle RAC database use files that must be available to all the nodes in the cluster. These files must be placed on a supported type of shared storage.
Review the following topics when deciding which type of shared storage to use during installation of Oracle Grid Infrastructure and Oracle RAC:
You cannot install the Oracle Grid Infrastructure software on a cluster file system. The Oracle Grid Infrastructure home (Grid home) must be on a local, NTFS formatted disk.
There are two ways of storing the shared Oracle Clusterware files:
Oracle ASM: You can install Oracle Clusterware files (OCR and voting disks) in Oracle ASM disk groups.
Oracle ASM is the required database storage option for Typical installations, and for Standard Edition Oracle RAC installations. It is an integrated, high-performance database file system and disk manager for Oracle Clusterware and Oracle Database files. It performs striping and mirroring of database files automatically.
Note:
You can no longer use OUI to install Oracle Clusterware or Oracle Database files directly on raw devices.
Only one Oracle ASM instance is permitted for each node regardless of the number of database instances on the node.
To store the Oracle Clusterware files in an Oracle ASM disk group, the disk group compatibility must be at least 11.2. See Oracle Automatic Storage Management Administrator's Guide for more information about disk group compatibility.
OCFS for Windows: OCFS for Windows is the only other supported cluster file system that you can use to store Oracle Clusterware and Oracle RAC files on Microsoft Windows platforms. OCFS for Windows is not the same as OCFS2, which is available on Linux platforms.
OCFS for Windows is included in the installation media for Oracle Grid Infrastructure and Oracle RAC on Microsoft Windows platforms and is installed automatically with Oracle Clusterware. However, for new installations, Oracle recommends that you use Oracle ASM to store the OCR and voting disk files. Oracle does not recommend using OCFS for Windows for Oracle Clusterware files.
Note:
You cannot put Oracle Clusterware files on Oracle Automatic Storage Management Cluster File System (Oracle ACFS). You cannot install Oracle Grid Infrastructure on a cluster file system.

See Also:
The Certify page on My Oracle Support for supported cluster file systems. See Section 2.4, "Checking Hardware and Software Certification on My Oracle Support".

Table 3-1, "Supported Storage Options for Oracle Clusterware and Oracle RAC Files and Home Directories" shows the storage options supported for storing Oracle Clusterware and Oracle RAC files.
Note:
For the most up-to-date information about supported storage options for Oracle Clusterware and Oracle RAC installations, refer to the Certify pages on the My Oracle Support Web site. See Section 2.4, "Checking Hardware and Software Certification on My Oracle Support".

There are several ways of storing Oracle Database (Oracle RAC) files that must be shared across all the nodes:
Oracle ASM: You can create Oracle RAC data files and recovery files in Oracle ASM disk groups.
Oracle ASM is the required database storage option for Typical installations, and for Standard Edition Oracle RAC installations.
A supported shared file system: Supported file systems include the following:
OCFS for Windows: OCFS for Windows is a cluster file system used to store Oracle Database binary and data files. If you intend to use OCFS for Windows for your database storage, then you should create partitions large enough for all the database and recovery files when you create the unformatted disk partitions that are used by OCFS for Windows.
Oracle ACFS: Oracle ACFS provides a general purpose file system that can store administrative files as well as external general purpose data files. You can install the Oracle Database software on Oracle ACFS.
Note:
You cannot put Oracle Clusterware or Oracle Database data files on Oracle ACFS.

See Also:
The Certify page on My Oracle Support for supported cluster file systems. See Section 2.4, "Checking Hardware and Software Certification on My Oracle Support".

Network File System (NFS) with Oracle Direct NFS client: You can configure Oracle RAC to access NFS V3 servers directly using an Oracle internal Direct NFS client.
Note:
You cannot use Direct NFS to store Oracle Clusterware files. You can only use Direct NFS to store Oracle Database files. To install Oracle RAC on Windows using Direct NFS, you must have access to a shared storage method other than NFS for the Oracle Clusterware files.

See Also:
Section 3.8.1, "About Direct NFS Storage" for more information on using Direct NFS

Table 3-1 shows the storage options supported for storing Oracle Clusterware and Oracle RAC files.
Table 3-1 Supported Storage Options for Oracle Clusterware and Oracle RAC Files and Home Directories
Each supported file system type has additional requirements that must be met to support Oracle Clusterware and Oracle RAC. Use the following sections to help you select your storage option:
Requirements for Using a Cluster File System for Oracle Clusterware Files
Identifying Storage Requirements for Using Oracle ASM for Shared Storage
To use OCFS for Windows for Oracle Clusterware files, you must comply with the following requirements:
If you choose to place your OCR files on a shared file system, then Oracle recommends that one of the following is true:
The disks used for the file system are on a highly available storage device, (for example, a RAID device that implements file redundancy)
At least two file systems are mounted, and you use the features of Oracle Clusterware 11g release 2 (11.2) to provide redundancy for the OCR and voting disks
If you use a RAID device to store the Oracle Clusterware files, then you must have a partition that has at least 280 megabytes (MB) of available space for the OCR and 280 MB for each voting disk.
If you use the redundancy features of Oracle Clusterware to provide high availability for the OCR and voting disk files, then you need a minimum of three file systems, and each one must have 560 MB of available space for the OCR and voting disk.
Note:
The smallest partition size that OCFS for Windows can use is 500 MB.

For example, to store all OCR and voting disk files on a cluster file system that does not provide redundancy at the hardware level (external redundancy), you should have approximately 2 GB of storage available over a minimum of three volumes (three separate volume locations for the OCR and voting disk files, one on each volume). If you use a file system that provides data redundancy, then you need only one physical disk with 280 MB of available space to store the OCR and 560 MB of available space for each voting disk file.
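The "approximately 2 GB over three volumes" figure above follows directly from the per-file minimums given in this section. A small sanity-check sketch (the variable names are illustrative, not Oracle-defined):

```python
# Minimums quoted in this section for a file system without hardware redundancy.
MB_PER_OCR = 280          # minimum space for one OCR location
MB_PER_VOTING_DISK = 280  # minimum space for one voting disk file

volumes = 3  # one OCR location and one voting disk file per volume
per_volume_mb = MB_PER_OCR + MB_PER_VOTING_DISK  # 560 MB per file system
total_mb = volumes * per_volume_mb               # total across three volumes

assert per_volume_mb == 560
assert total_mb == 1680  # roughly 2 GB, matching the guideline above
```

Note also that each volume must be at least 500 MB, the smallest partition size OCFS for Windows supports, which the 560 MB per-volume figure already satisfies.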
Note:
If you are upgrading from an earlier release of Oracle Clusterware, and your existing cluster uses 100 MB disk partitions for the OCR and 20 MB disk partitions for the voting disk, then you must extend these partitions to at least 300 MB. Oracle recommends that you do not use partitions, but instead place the OCR and voting disks in Oracle ASM disk groups that are marked as QUORUM disk groups.

If the existing OCR and voting disk files are 280 MB or larger, then you do not have to change the size of the OCR or voting disks before performing the upgrade.
All storage products must be supported by both your server and storage vendors.
To identify the storage requirements for using Oracle ASM, you must determine the number of devices and the amount of free disk space that you require. To complete this task, follow these steps:
Tip:
As you progress through the following steps, make a list of the raw device names you intend to use to create the Oracle ASM disk groups, and have this information available during the Oracle Grid Infrastructure installation or when creating your Oracle RAC database.

Determine whether you want to use Oracle ASM for Oracle Clusterware files (OCR and voting disks), Oracle Database data files, recovery files, or all file types.
Note:
You do not have to use the same storage mechanism for data files and recovery files. You can store one type of file in a cluster file system while storing the other file type within Oracle ASM. If you plan to use Oracle ASM for both data files and recovery files, then you should create separate Oracle ASM disk groups for the data files and the recovery files.
All the OCR files, and all the voting disk files, must be located on either Oracle ASM or a cluster file system; you cannot split the files of one type between Oracle ASM and a cluster file system. You can, however, store the OCR on Oracle ASM and the voting disk files on a cluster file system, because the storage type is chosen per file type.
If you plan to enable automated backups for your Oracle RAC database, then you must place the fast recovery area on shared storage. You can choose Oracle ASM as the shared storage mechanism for recovery files by specifying an Oracle ASM disk group for the fast recovery area. Depending on how you choose to create a database during the installation, you have the following options:
If you created the Oracle ASM disk groups prior to performing the installation, and then select an installation method that runs DBCA in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to use the same Oracle ASM disk group for data files and recovery files. You can also choose to use different disk groups for each file type. Ideally, you should create separate Oracle ASM disk groups for data files and recovery files.
The same choice is available to you if you use DBCA after the installation to create a database.
If you select an installation type that runs DBCA in non-interactive mode, then you must use the same Oracle ASM disk group for data files and recovery files. The Oracle ASM disk group you select must have been created prior to starting the installation or DBCA.
Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and amount of disk space that you require. The redundancy levels are as follows:
External redundancy
An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.
Because Oracle ASM does not mirror data in an external redundancy disk group, Oracle recommends that you use external redundancy with storage devices such as RAID, or other similar devices that provide their own data protection mechanisms.
Even if you select external redundancy, you must have at least three voting disks configured, as each voting disk is an independent entity, and cannot be mirrored.
Normal redundancy
A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.
For Oracle Clusterware files, a normal redundancy disk group requires a minimum of three disk devices and provides three voting disk files, one OCR, and two OCR copies (one primary and one secondary mirror). When using a normal redundancy disk group, the cluster can survive the loss of one failure group.
For most installations, Oracle recommends that you select normal redundancy disk groups.
High redundancy
In a high redundancy disk group, Oracle ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.
For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices and provides five voting disk files, one OCR, and three copies of the OCR (one primary and two secondary mirrors). With high redundancy, the cluster can survive the loss of two failure groups.
While high redundancy disk groups provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to use this redundancy level.
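The effective-capacity rules above reduce to a simple calculation: external redundancy keeps all raw capacity, normal redundancy keeps half, and high redundancy keeps one third. A sketch (the function is an illustration, not an Oracle-supplied API, and it ignores per-file overhead):

```python
def effective_space_gb(disk_sizes_gb, redundancy):
    """Approximate usable space in a disk group, per the redundancy
    descriptions above: external = 1x, normal = 1/2, high = 1/3."""
    raw = sum(disk_sizes_gb)
    mirror_copies = {"external": 1, "normal": 2, "high": 3}[redundancy]
    return raw / mirror_copies

disks = [100, 100, 100]  # three 100 GB disks
assert effective_space_gb(disks, "external") == 300
assert effective_space_gb(disks, "normal") == 150
assert effective_space_gb(disks, "high") == 100
```

The same three disks therefore yield very different usable capacity depending on the redundancy level, which is why the sizing tables later in this section grow with the redundancy level.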
Note:
After a disk group is created, you cannot alter the redundancy level of the disk group.

Determine the total amount of disk space that you require for the Oracle Clusterware files.
Use Table 3-2 to determine the minimum number of disks and the minimum disk space requirements for installing Oracle Clusterware using Oracle ASM for shared storage:
Table 3-2 Oracle Clusterware Disk Space for Oracle ASM by Redundancy Type
File Types Stored | Minimum Number of Disks | Disk or Disk Partition Sizes
---|---|---
OCR and voting disks in an external redundancy disk group | 1 | At least 300 MB for each voting disk file and 300 MB for each OCR
OCR and voting disks in a normal redundancy disk group | 3 | At least 600 MB for the OCR and its copies, at least 900 MB for the voting disk files, or at least 1.5 GB for both file types in one disk group. Note: If you create a disk group during installation, then it must be at least 2 GB in size.
OCR and voting disks in a high redundancy disk group | 5 | At least 900 MB for the OCR and its copies, at least 1.5 GB for the voting disk files, or at least 2.4 GB for both file types in one disk group.
Note:
If the voting disk files are in a disk group, then note that disk groups that contain Oracle Clusterware files (OCR and voting disk files) have a higher minimum number of failure groups than other disk groups.

If you create a disk group for the OCR and voting disk files as part of the installation, then the installer requires that you create these files on a disk group with at least 2 GB of available space.

A quorum failure group is a special type of failure group. Disks in quorum failure groups do not contain user data. A quorum failure group is not considered when determining redundancy requirements with respect to storing user data. However, a quorum failure group counts when mounting a disk group.
To ensure high availability of Oracle Clusterware files on Oracle ASM, you must have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files.
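The Clusterware minimums from Table 3-2 can be captured as a small lookup for planning purposes. This is a convenience sketch, not an Oracle-supplied structure; the dictionary name is invented:

```python
# Minimums from Table 3-2: disks required and minimum disk group size (GB)
# when the installer creates the disk group for Oracle Clusterware files.
CLUSTERWARE_MINIMUMS = {
    # redundancy: (min_disks, min_disk_group_gb)
    "external": (1, 2.0),  # installer requires at least 2 GB in the group
    "normal":   (3, 2.0),
    "high":     (5, 2.4),
}

min_disks, min_gb = CLUSTERWARE_MINIMUMS["normal"]
assert min_disks == 3 and min_gb == 2.0
# High redundancy needs five disks and the largest combined minimum.
assert CLUSTERWARE_MINIMUMS["high"] == (5, 2.4)
```

Checking a planned configuration against such a table before installation avoids the installer rejecting an undersized disk group.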
Determine the total amount of disk space that you require for the Oracle database files and recovery files.
Use the following table to determine the minimum number of disks and the minimum disk space requirements for installing the starter database:
Table 3-3 Total Oracle Database Storage Space Required by Redundancy Type
Redundancy Level | Minimum Number of Disks | Database Files | Recovery Files | Both File Types
---|---|---|---|---
External | 1 | 1.5 GB | 3 GB | 4.5 GB
Normal | 2 | 3 GB | 6 GB | 9 GB
High | 3 | 4.5 GB | 9 GB | 13.5 GB
Note:
The file sizes listed in the previous table are estimates of minimum requirements for a new installation (or a database without any user data). The file sizes for your database will be larger.

Determine if you can use an existing disk group.
If an Oracle ASM instance currently exists on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.
See Section 3.3.2.1, "Using an Existing Oracle ASM Disk Group" for more information about using an existing disk group.
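The database storage minimums from Table 3-3 can also be sanity-checked as data. A sketch (names are illustrative): the "Both File Types" column is simply the sum of the database and recovery columns at each redundancy level.

```python
# Minimums from Table 3-3, in GB, for a starter database.
DB_STORAGE_GB = {
    # redundancy: (min_disks, database_files_gb, recovery_files_gb)
    "external": (1, 1.5, 3.0),
    "normal":   (2, 3.0, 6.0),
    "high":     (3, 4.5, 9.0),
}

for level, (disks, data_gb, reco_gb) in DB_STORAGE_GB.items():
    both = data_gb + reco_gb  # matches the "Both File Types" column
    print(f"{level}: {disks} disk(s), {both} GB for both file types")

assert DB_STORAGE_GB["normal"][1] + DB_STORAGE_GB["normal"][2] == 9.0
assert DB_STORAGE_GB["high"][1] + DB_STORAGE_GB["high"][2] == 13.5
```

As the note above the table states, these are minimums for an empty database; real databases require more, so treat these numbers as a floor, not an estimate.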
Optionally, identify failure groups for the Oracle ASM disk group devices.
Note:
You only have to complete this step if you plan to use an installation method that includes configuring Oracle ASM disk groups before installing Oracle RAC, or creating an Oracle RAC database.

If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. Failure groups define Oracle ASM disks that share a common potential failure mechanism. By default, each device comprises its own failure group. If you choose to define custom failure groups, then note the following:
You must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.
If the disk group contains data files and Oracle Clusterware files, including the voting disk files, then you must specify a minimum of three failure groups for normal redundancy disk groups and five failure groups for high redundancy disk groups.
Disk groups containing voting disk files must have at least three failure groups for normal redundancy or at least five failure groups for high redundancy. If the disk group does not contain the voting disk files, then the minimum number of required failure groups is two for normal redundancy and three for high redundancy. The minimum number of failure groups applies whether or not they are custom failure groups.
If two disk devices in a normal redundancy disk group are attached to the same small computer system interface (SCSI) controller, then the disk group becomes unavailable if the controller fails. The SCSI controller in this example is a single point of failure. To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration enables the disk group to tolerate the failure of one SCSI controller.
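The failure-group minimums stated above can be summarized in one function. This is a planning sketch with an invented name, not part of any Oracle tool:

```python
def min_failure_groups(redundancy: str, contains_voting_disks: bool) -> int:
    """Minimum failure groups per the rules above: 2 (normal) / 3 (high)
    for ordinary disk groups, rising to 3 / 5 when the disk group also
    holds voting disk files."""
    if contains_voting_disks:
        return {"normal": 3, "high": 5}[redundancy]
    return {"normal": 2, "high": 3}[redundancy]

assert min_failure_groups("normal", contains_voting_disks=False) == 2
assert min_failure_groups("normal", contains_voting_disks=True) == 3
assert min_failure_groups("high", contains_voting_disks=True) == 5
```

These minimums apply whether the failure groups are the default one-disk-per-group arrangement or custom groups you define, for example one group per SCSI controller as in the example above.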
Note:
You can define custom failure groups after installation of Oracle Grid Infrastructure using the GUI tool ASMCA, the command-line tool asmcmd, or SQL*Plus commands.

For more information about Oracle ASM failure groups, refer to Oracle Automatic Storage Management Administrator's Guide.
If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.
Do not specify multiple partitions on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.
Nonshared logical partitions are not supported with Oracle RAC. If you want to use logical partitions for your Oracle RAC database, then you must use shared logical volumes created by a logical volume manager such as diskpart.msc.
To use Oracle ASM as the storage option for either database or recovery files, you must use an existing Oracle ASM disk group, or use ASMCA to create the necessary disk groups before installing Oracle Database 11g release 2 and creating an Oracle RAC database.
To determine if an Oracle ASM disk group currently exists, or to determine whether there is sufficient disk space in an existing disk group, you can use Oracle Enterprise Manager, either Grid Control or Database Control. Alternatively, you can use the following procedure:
In the Services Control Panel, ensure that the OracleASMService+ASM
n
service, where n
is the node number, has started.
Open a Windows command prompt and temporarily set the ORACLE_SID
environment variable to specify the appropriate value for the Oracle ASM instance to use.
For example, if the Oracle ASM system identifier (SID) is named +ASM1
, then enter a setting similar to the following:
C:\> set ORACLE_SID=+ASM1
If the ORACLE_HOME
environment variable is not set to the Grid home, then temporarily set this variable to the location of the Grid home using a command similar to the following:
C:\> set ORACLE_HOME=C:\app\11.2.0\grid
Use ASMCMD to connect to the Oracle ASM instance and start the instance if necessary with a command similar to the following:
C:\> %ORACLE_HOME%\bin\asmcmd
ASMCMD> startup
Enter one of the following commands to view the existing disk groups, their redundancy level, and the amount of free disk space in each disk group:
ASMCMD> lsdg
or:
C:\> %ORACLE_HOME%\bin\asmcmd -p lsdg
From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.
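The free-space check in the preceding step can also be scripted against the lsdg output. The column layout in the sample below is a simplified assumption for illustration only; real asmcmd lsdg output has additional columns, so check the header row of your own output before relying on a script like this.

```python
# Sketch: pick disk groups with enough free space from "lsdg"-style
# output. The simplified column layout below is an assumption for
# illustration; real lsdg output has additional columns.
SAMPLE_LSDG = """\
State    Type    Total_MB  Free_MB  Name
MOUNTED  NORMAL  40960     22528    DATA/
MOUNTED  EXTERN  20480     1024     FRA/
"""

def groups_with_free_space(lsdg_text, required_mb):
    """Return names of disk groups whose Free_MB meets required_mb."""
    lines = lsdg_text.strip().splitlines()
    header = lines[0].split()
    free_idx = header.index("Free_MB")
    name_idx = header.index("Name")
    candidates = []
    for row in lines[1:]:
        cols = row.split()
        if int(cols[free_idx]) >= required_mb:
            candidates.append(cols[name_idx].rstrip("/"))
    return candidates

print(groups_with_free_space(SAMPLE_LSDG, 2048))  # ['DATA']
```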
If necessary, install, or identify the additional disk devices required to meet the storage requirements listed in the previous section.
Be aware of the following restrictions when configuring disk partitions for use with Oracle ASM:
You cannot use primary partitions for storing Oracle Clusterware files while running OUI to install Oracle Clusterware as described in Chapter 4, "Installing Oracle Grid Infrastructure for a Cluster". You must create logical drives inside extended partitions for the disks to be used by Oracle Clusterware files and Oracle ASM.
With x64 Windows, you can create up to 128 primary partitions for each disk.
You can create shared directories only on primary partitions and logical drives.
Oracle recommends that you limit the number of partitions you create on a single disk to prevent disk contention. For this reason, you might prefer to use extended partitions rather than primary partitions for storing Oracle software files.
To use a shared file system for Oracle Clusterware, Oracle ASM, and Oracle RAC, the file system must comply with the following requirements:
To use a cluster file system, it must be a supported cluster file system, as listed in Section 3.2, "Supported Storage Options for Oracle Grid Infrastructure and Oracle RAC".
To use NFS, it must be on a certified network attached storage (NAS) device. Access the My Oracle Support Web site as described in Section 2.4, "Checking Hardware and Software Certification on My Oracle Support" to find a list of certified NAS devices.
If you choose to place your OCR files on a shared file system, then Oracle recommends that one of the following is true:
If you choose to place the Oracle RAC data files on a shared file system, then one of the following should be true:
The disks used for the file system are on a highly available storage device, (for example, a RAID device).
The file systems consist of at least two independent file systems, with the data files on one file system, and the recovery files on a different file system.
The user account with which you perform the installation (oracle
or grid
) must have write permissions to create the files in the path that you specify for the shared storage.
Note:
If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting disk partitions, then you must extend these partitions to at least 300 MB. Oracle recommends that you do not use partitions, but instead place OCR and voting disks in Oracle ASM disk groups marked as QUORUM disk groups.
All storage products must be supported by both your server and storage vendors.
Use Table 3-4 and Table 3-5 to determine the minimum size for shared file systems:
Table 3-4 Oracle Clusterware Shared File System Volume Size Requirements
File Types Stored | Number of Volumes | Volume Size |
---|---|---|
Voting disks with external redundancy | 3 | At least 300 MB for each voting disk volume |
OCR with external redundancy | 1 | At least 300 MB for each OCR volume |
Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software | 1 | At least 300 MB for each OCR volume, and at least 300 MB for each voting disk volume |
Table 3-5 Oracle RAC Shared File System Volume Size Requirements
File Types Stored | Number of Volumes | Volume Size |
---|---|---|
Oracle Database data files | 1 | At least 1.5 GB for each volume |
Recovery files (Note: Recovery files must be on a different volume than database files) | 1 | At least 2 GB for each volume |
In Table 3-4 and Table 3-5, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on the shared file system with normal redundancy, you should have at least 2 GB of storage available over a minimum of three volumes (three separate volume locations for the OCR and two OCR mirrors, and one voting disk on each volume). You should have a minimum of three physical disks, each at least 500 MB, to ensure that voting disks and OCR files are on separate physical disks. If you also use this shared storage for Oracle RAC, using one volume for data files and one volume for recovery files, then you should have at least 3.5 GB available storage over two volumes, and at least 5.5 GB available total for all volumes.
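The cumulative arithmetic above can be spelled out step by step. The short sketch below only restates the figures from Table 3-4 and Table 3-5; it is an illustration, not an Oracle sizing tool.

```python
# Restating the cumulative sizing arithmetic from Table 3-4 and Table 3-5
# (all figures come from the text; sizes in MB unless noted otherwise).
OCR_MB = 300          # minimum size per OCR volume
VOTING_MB = 300       # minimum size per voting disk volume

ocr_copies = 3        # one OCR plus two OCR mirrors
voting_disks = 3      # three voting disks, one per volume

clusterware_mb = ocr_copies * OCR_MB + voting_disks * VOTING_MB
print(clusterware_mb)        # 1800 MB, rounded up to "at least 2 GB"

data_gb = 1.5                # Table 3-5 minimum for the data file volume
recovery_gb = 2.0            # Table 3-5 minimum for the recovery file volume
rac_gb = data_gb + recovery_gb
print(rac_gb)                # 3.5 GB over two volumes for Oracle RAC
print(2.0 + rac_gb)          # 5.5 GB total for all volumes combined
```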
If you use OCFS for Windows or Oracle ASM for your database files, then your database is created by default with files managed by Oracle Database. When using the Oracle Managed Files feature, you need to specify only the database object name instead of file names when creating or deleting database files.
Configuration procedures are required to enable Oracle Managed Files.
See Also:
"Using Oracle-Managed Files" in Oracle Database Administrator's Guide
When you have determined your disk storage options, first perform the steps listed in Section 3.5, "Preliminary Shared Disk Preparation", and then configure the shared storage:
To use Oracle ASM, refer to Section 3.6, "Configuring Shared Storage for Oracle ASM".
To use a file system, refer to Section 3.7, "Configuring Storage for Oracle Database Files on OCFS for Windows".
Complete the following steps to prepare shared disks for storage:
You must disable write caching on all disks that will be used to share data between the nodes in your cluster. Perform these steps to disable write caching:
Click Start, then select Control Panel, then Administrative Tools, then Computer Management, then Device Manager, and then Disk drives.
Expand the Disk drives and double-click the first drive listed.
Under the Policies tab for the selected drive, uncheck the option that enables write caching.
Double-click each of the other drives that will be used by Oracle Clusterware and Oracle RAC and disable write caching as described in the previous step.
Caution:
Any disks that you use to store files, including database files, that will be shared between nodes, must have write caching disabled.
If you are using Windows Server 2003 R2 Enterprise Edition or Datacenter Edition, then you must enable disk automounting, because it is disabled by default. For other Windows releases, even though the automount feature is enabled by default, you should verify that automount is enabled.
You must enable automounting when using:
Raw partitions for Oracle RAC
OCFS for Windows
Oracle Clusterware
Raw partitions for single-node database installations
Logical drives for Oracle ASM
Note:
Raw partitions are supported only when upgrading an existing installation using the configured partitions. On new installations, using raw partitions is not supported by ASMCA or OUI, but is supported by the software if you perform manual configuration.
If you upgrade the operating system from one version of Windows to another (for example, Windows Server 2003 to Windows Advanced Server 2003), then you must repeat this procedure after the upgrade is finished.
To determine if automatic mounting of new volumes is enabled, use the following commands:
C:\> diskpart
DISKPART> automount
Automatic mounting of new volumes disabled.
To enable automounting:
Enter the following commands at a command prompt:
C:\> diskpart
DISKPART> automount enable
Automatic mounting of new volumes enabled.
Type exit to end the diskpart session.
Repeat steps 1 and 2 for each node in the cluster.
When you have prepared all of the cluster nodes in your Windows Server 2003 R2 system as described in the previous steps, restart all of the nodes.
Note:
All nodes in the cluster must have automatic mounting enabled to correctly install Oracle RAC and Oracle Clusterware. Oracle recommends that you enable automatic mounting before creating any logical partitions for use by the database, Oracle ASM, or OCFS for Windows. You must restart each node after enabling disk automounting. After it is enabled and the node is restarted, automatic mounting remains active until it is disabled.
The installer does not suggest a default location for the OCR or the voting disk. If you choose to create these files on Oracle ASM, then you must first create and configure disk partitions to be used in the Oracle ASM disk group.
The following sections describe how to create and configure disk partitions to be used by Oracle ASM for storing Oracle Clusterware files or Oracle Database data files:
To use direct-attached storage (DAS) or storage area network (SAN) disks for Oracle ASM, each disk must have a partition table. Oracle recommends creating exactly one partition for each disk that encompasses the entire disk.
Note:
You can use any physical disk for Oracle ASM, if it is partitioned. However, you cannot use NAS or Microsoft dynamic disks. Use the Microsoft Computer Management utility or the command-line tool diskpart to create the partitions. Ensure that you create the partitions without drive letters. After you have created the partitions, the disks can be configured.
See Also:
Oracle Database Installation Guide for Microsoft Windows for information about creating DAS or SAN disk partitions
Section 1.2.7, "Prepare Disk Partitions" for more information about using diskpart
to create a partition
The only partitions that OUI displays for Windows systems are logical drives that are on disks that do not contain a primary partition, and have been marked (or stamped) with asmtool
. Configure the disks before installation either by using asmtoolg
(graphical user interface (GUI) version) or using asmtool
(command line version). You also have the option of using the asmtoolg
utility during Oracle Grid Infrastructure for a cluster installation.
The asmtoolg
and asmtool
utilities only work on partitioned disks; you cannot use Oracle ASM on unpartitioned disks. You can also use these tools to reconfigure the disks after installation. These utilities are installed automatically as part of Oracle Grid Infrastructure.
The following section describes the asmtoolg
and asmtool
functions and commands.
The asmtoolg
and asmtool
tools associate meaningful, persistent names with disks to facilitate using those disks with Oracle ASM. Oracle ASM uses disk strings to operate more easily on groups of disks at the same time. The names that asmtoolg
or asmtool
create make this easier than using Windows drive letters.
All disk names created by asmtoolg
or asmtool
begin with the prefix ORCLDISK
followed by a user-defined prefix (the default is DATA
), and by a disk number for identification purposes. You can use them as raw devices in the Oracle ASM instance by specifying a name \\.\ORCLDISK
prefixn
, where prefix
either can be DATA
, or a value you supply, and where n
represents the disk number.
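The naming convention described above can be illustrated with a short sketch. The helper below is hypothetical; it only reproduces the documented pattern and is not part of the asmtool utilities.

```python
# Hypothetical helper that reproduces the asmtool naming pattern
# described above: ORCLDISK + prefix + disk number, addressed as
# \\.\ORCLDISKprefixN.
def stamp_names(prefix="DATA", count=3):
    return [r"\\.\ORCLDISK" + "%s%d" % (prefix, n) for n in range(count)]

for name in stamp_names():
    print(name)        # \\.\ORCLDISKDATA0, \\.\ORCLDISKDATA1, ...
```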
To configure your disks with asmtoolg
, see one of the following sections:
Use asmtoolg
(GUI version) to create device names; use asmtoolg
to add, change, delete, and examine the devices available for use in Oracle ASM.
To add or change disk stamps:
In the installation media for Oracle Grid Infrastructure, go to the asmtool
folder and double-click asmtoolg
.
If Oracle Clusterware is installed, then go to the Grid_home
\
bin
folder and double-click asmtoolg.exe
.
On Windows Server 2008 and Windows Server 2008 R2, if User Account Control (UAC) is enabled, then you must create a desktop shortcut to a command window. Open the command window using the Run as Administrator option on the right-click context menu, and then launch asmtoolg
.
Select the Add or change label option, and then click Next.
asmtoolg
shows the devices available on the system. Unrecognized disks have a status of "Candidate device", stamped disks have a status of "Stamped ASM device," and disks that have had their stamp deleted have a status of "Unstamped ASM device." The tool also shows disks that are recognized by Windows as a file system (such as NTFS). These disks are not available for use as Oracle ASM disks, and cannot be selected. In addition, Microsoft Dynamic disks are not available for use as Oracle ASM disks.
If necessary, follow the steps under Section 1.2.7, "Prepare Disk Partitions" to create disk partitions for the Oracle ASM instance.
On the Stamp Disks window, select the disks that you want to use with Oracle ASM.
For ease of use, Oracle ASM can generate unique stamps for all of the devices selected for a given prefix. The stamps are generated by concatenating a number with the prefix specified. For example, if the prefix is DATA
, then the first Oracle ASM link name is ORCLDISKDATA0
.
You can also specify the stamps of individual devices.
Optionally, select a disk to edit the individual stamp (Oracle ASM link name).
Click Next.
Click Finish.
To delete disk stamps:
Select the Delete labels option, then click Next.
The delete option is only available if disks exist with stamps. The delete screen shows all stamped Oracle ASM disks.
On the Delete Stamps screen, select the disks to unstamp.
Click Next.
Click Finish.
asmtool
is a command-line interface for marking (or stamping) disks to be used with Oracle ASM. It has the following options:
Option | Description | Example |
---|---|---|
-add |
Adds or changes stamps. You must specify the hard disk, partition, and new stamp name. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option.
If necessary, follow the steps under Section 1.2.7, "Prepare Disk Partitions" to create disk partitions for the Oracle ASM instance. |
asmtool -add [-force] \Device\Harddisk1\Partition1 ORCLDISKASM0 \Device\Harddisk2\Partition1 ORCLDISKASM2... |
-addprefix |
Adds or changes stamps using a common prefix to generate stamps automatically. The stamps are generated by concatenating a number with the prefix specified. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option. |
asmtool -addprefix ORCLDISKASM [-force] \Device\Harddisk1\Partition1 \Device\Harddisk2\Partition1... |
-create |
Creates an Oracle ASM disk device from a file instead of a partition.
Note: Usage of this command is not supported for production environments. |
asmtool -create \\server\share\file 1000
asmtool -create D:\asm\asmfile02.asm 240 |
-list |
Lists available disks. The stamp, Windows device name, and disk size in MB are shown. |
asmtool -list |
-delete |
Removes existing stamps from disks. |
asmtool -delete ORCLDISKASM0 ORCLDISKASM1... |
Note:
If you use-add
, -addprefix
, or -delete
, asmtool
notifies the Oracle ASM instance on the local node and on other nodes in the cluster, if available, to rescan the available disks.
To use OCFS for Windows for your Oracle home and data files, the following partitions, at a minimum, must exist before you run OUI to install Oracle Clusterware:
5.5 GB or larger partition for the Oracle home, if you want a shared Oracle home
3 GB or larger partitions for the Oracle Database data files and recovery files
Log in to Windows with a user account that is a member of the Administrators group, and perform the steps described in this section to set up the shared disk raw partitions for OCFS for Windows. Windows refers to raw partitions as logical drives. If you need more information about creating partitions, then refer to the Windows online help from within the Disk Management utility.
Run the Windows Disk Management utility from one node to create an extended partition. Use a basic disk; dynamic disks are not supported.
Create a partition for the Oracle Database data files and recovery files, and optionally create a second partition for the Oracle home.
The number of partitions used for OCFS for Windows affects performance. Therefore, you should create the minimum number of partitions needed for the OCFS for Windows option you choose.
Note:
Oracle supports installing the database into multiple Oracle homes on a single system. This provides flexibility in the deployment and maintenance of the database software. For example, it enables different versions of the database to run simultaneously on the same system, or it enables you to upgrade specific database or Oracle Automatic Storage Management instances on a system without affecting other running databases.
However, when you have installed multiple Oracle homes on a single system, there is also some added complexity that you may have to consider to allow these Oracle homes to coexist. For more information on this topic, refer to Oracle Database Platform Guide for Microsoft Windows and Oracle Real Application Clusters Installation Guide for Microsoft Windows x64 (64-Bit).
To create the required partitions, perform the following steps:
From an existing node in the cluster, run the DiskPart utility as follows:
C:\> diskpart
DISKPART>
List the available disks, and then select the disk on which you want to create a partition by specifying its disk number (n).
DISKPART> list disk
DISKPART> select disk n
Create an extended partition:
DISKPART> create part ext
Create a logical drive of the desired size after the extended partition is created using the following syntax:
DISKPART> create part log [size=n] [offset=n] [noerr]
Repeat steps 2 through 4 for the second and any additional partitions. An optimal configuration is one partition for the Oracle home and one partition for Oracle Database files.
List the available volumes, and remove any drive letters from the logical drives you plan to use.
DISKPART> list volume
DISKPART> select volume n
DISKPART> remove
If you are preparing drives on a Windows Server 2003 R2 system, then you should restart all nodes in the cluster after you have created the logical drives.
Check all nodes in the cluster to ensure that the partitions are visible on all the nodes and to ensure that none of the Oracle partitions have drive letters assigned. If any partitions have drive letters assigned, then remove them by performing these steps:
Right-click the partition in the Windows Disk Management utility
Select "Change Drive Letters and Paths..." from the menu
Click Remove in the "Change Drive Letter and Paths" window
If you installed Oracle Grid Infrastructure, and you want to use OCFS for Windows for storage for Oracle RAC, then run the ocfsformat.exe
command from the Grid_home
\cfs
directory using the following syntax:
Grid_home\cfs\OcfsFormat /m link_name /c ClusterSize_in_KB /v volume_label /f /a
Where:
/m
link_name
is the mountpoint for the file system that you want to format with OCFS for Windows. On Windows, provide a drive letter corresponding to the logical drive.
ClusterSize_in_KB
is the Cluster size or allocation size for the OCFS for Windows volume (this option must be used with the /a
option or else the default size of 4 kilobytes (KB) is used)
volume_label
is an optional volume label
The /f
option forces the format of the specified volume
The /a
option, if specified, forces OcfsFormat
to use the clustersize specified with the /c
option
For example, to create an OCFS for Windows formatted shared disk partition named DATA, mounted as U:
, using a shared disk with a nondefault cluster size of 1 MB, you would use the following command:
ocfsformat /m U: /c 1024 /v DATA /f /a
This section contains the following information about Direct NFS:
Oracle Disk Manager (ODM) can manage NFS on its own. This is referred to as Direct NFS. Direct NFS implements NFS version 3 protocol within the Oracle Database kernel. This change enables monitoring of NFS status using the ODM interface. The Oracle Database kernel driver tunes itself to obtain optimal use of available resources.
Starting with Oracle Database 11g release 1 (11.1), you can configure Oracle Database to access NFS version 3 servers directly using Direct NFS. This enables the storage of data files on a supported NFS system.
Note:
Use NFS servers supported for Oracle RAC. Check My Oracle Support, as described in Section 2.4, "Checking Hardware and Software Certification on My Oracle Support", for support information. If Oracle Database cannot open an NFS server using Direct NFS, then an informational message is logged in the Oracle alert and trace files indicating that Direct NFS could not be established.
Note:
Direct NFS does not work if the backend NFS server does not support a write size (wtmax
) of 32768 or larger. The Oracle files resident on the NFS server that are served by the Direct NFS Client can also be accessed through a third-party NFS client. Management of Oracle data files created with Direct NFS should be done according to the guidelines specified in the "Managing Datafiles and Tempfiles" chapter of Oracle Database Administrator's Guide.
If you use Direct NFS, then you must create a configuration file, oranfstab
, to specify the options, attributes, and parameters that enable Oracle Database to use Direct NFS. Direct NFS looks for the mount point entries in Oracle_home
\database
\oranfstab
. It uses the first matched entry as the mount point. You must create the oranfstab
file in the Oracle_home
\database
directory.
When the oranfstab
file is placed in Oracle_home
\database
, the entries in the file are specific to a single database. For Oracle RAC installations in a shared Oracle home, the oranfstab file is globally available to all database instances. All instances that use the shared Oracle home use the same Oracle_home
\database
\oranfstab
file. For a nonshared Oracle home, because all the Oracle RAC instances use the same oranfstab
file, you must replicate the oranfstab
file on all of the nodes. Also, you must keep the oranfstab
file synchronized on all the nodes.
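For nonshared Oracle homes, one quick way to confirm that the replicated copies match is to compare content hashes. The sketch below is illustrative only; the node paths you would pass in (for example, over administrative shares) are site-specific and hypothetical.

```python
# Sketch: check that every oranfstab copy has identical contents by
# comparing SHA-256 digests. The paths you pass in are site-specific
# and hypothetical, for example
# \\node2\C$\app\oracle\product\11.2.0\db_1\database\oranfstab.
import hashlib

def file_digest(path):
    """Return the SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def oranfstab_in_sync(paths):
    """True when all listed oranfstab copies are byte-identical."""
    return len({file_digest(p) for p in paths}) == 1
```

If the function returns False, compare the copies and redistribute the correct file before restarting the affected instances.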
Note:
If you remove an NFS path from oranfstab
that Oracle Database is using, then you must restart the database for the change to be effective. In addition, the mount point that you use for the file system must be identical on each node.
See Also:
Section 3.8.5, "Enabling the Direct NFS Client" for more information about creating theoranfstab
file.
Direct NFS determines mount point settings for NFS storage devices based on the configuration information in oranfstab
. If Oracle Database cannot open an NFS server using Direct NFS, then an error message is written into the Oracle alert and trace files indicating that Direct NFS could not be established.
See Also:
Section 3.2.2, "Supported Storage Options for Oracle RAC" for a list of the file types that are supported with Direct NFS.
Direct NFS can use up to four network paths defined in the oranfstab
file for an NFS server. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, then Direct NFS re-issues all outstanding requests over any remaining paths.
Note:
You can have only one active Direct NFS implementation for each instance. Using Direct NFS on an instance prevents the use of another Direct NFS implementation.
Use the following global dynamic performance views for managing Direct NFS usage with your Oracle RAC database:
GV$DNFS_SERVERS
: Lists the servers that are accessed using Direct NFS.
GV$DNFS_FILES
: Lists the files that are currently open using Direct NFS.
GV$DNFS_CHANNELS
: Shows the open network paths, or channels, to servers for which Direct NFS is providing files.
GV$DNFS_STATS
: Lists performance statistics for Direct NFS.
To enable the Direct NFS Client, you must add an oranfstab
file to Oracle_home
\database
. When oranfstab
is placed in this directory, the entries in this file are specific to one particular database. The Direct NFS Client searches for the mount point entries as they appear in oranfstab
. The Direct NFS Client uses the first matched entry as the mount point.
Complete the following procedure to enable the Direct NFS Client:
Create an oranfstab
file with the following attributes for each NFS server accessed by Direct NFS:
server
: The NFS server name.
path
: Up to four network paths to the NFS server, specified either by internet protocol (IP) address, or by name, as displayed using the ifconfig
command on the NFS server.
local
: Up to 4 network interfaces on the database host, specified by IP address, or by name, as displayed using the ipconfig
command on the database host.
export
: The exported path from the NFS server. Use a UNIX-style path.
mount
: The corresponding local mount point for the exported volume. Use Windows-style path.
mnt_timeout
: (Optional) Specifies the time (in seconds) for which Direct NFS client should wait for a successful mount before timing out. The default timeout is 10 minutes (600).
uid
: (Optional) The UNIX user ID to be used by Direct NFS to access all NFS servers listed in oranfstab
. The default value is uid:65534
, which corresponds to user:nobody
on the NFS server.
gid
: (Optional) The UNIX group ID to be used by Direct NFS to access all NFS servers listed in oranfstab
. The default value is gid:65534
, which corresponds to group:nogroup
on the NFS server.
The mount point specified in the oranfstab
file represents the local path where the database files would normally reside if Direct NFS were not used. For example, if the data files would otherwise be located in the C:\app\oracle\oradata\orcl
directory, then you specify C:\app\oracle\oradata\orcl
for the NFS virtual mount point in the corresponding oranfstab
file.
Example 3-1 and Example 3-2 provide examples of how the Direct NFS attributes can be used in an oranfstab
file.
Note:
Direct NFS ignores a uid
or gid
value of 0
.
The exported path from the NFS server must be accessible for read/write/execute
by the user with the uid
, gid
specified in oranfstab
. If neither uid
nor gid
is listed, then the exported path must be accessible by the user with uid:65534
and gid:65534
.
Replace the standard ODM library, oraodm11.dll
, with the ODM NFS library.
Oracle Database uses the ODM library, oranfsodm11.dll
, to enable Direct NFS. To replace the ODM library, complete the following steps:
Change directory to Oracle_home
\bin
.
Shut down the Oracle Database instance on a node using the Server Control Utility (SRVCTL).
Enter the following commands:
copy oraodm11.dll oraodm11.dll.orig
copy /Y oranfsodm11.dll oraodm11.dll
Restart the Oracle Database instance using SRVCTL.
Repeat Step a to Step d for each node in the cluster.
Example 3-1 oranfstab File Using Local and Path NFS Server Entries
The following example of an oranfstab
file shows an NFS server entry, where the NFS server, MyDataServer1
, uses 2 network paths specified with IP addresses.
server: MyDataServer1
local: 132.34.35.10
path: 132.34.35.12
local: 132.34.55.10
path: 132.34.55.12
export: /vol/oradata1 mount: C:\APP\ORACLE\ORADATA\ORCL
Example 3-2 oranfstab File Using Network Connection Names
The following example of an oranfstab
file shows an NFS server entry, where the NFS server, MyDataServer2
, uses 4 network paths specified by the network interface to use, or the network connection name. Multiple export paths are also used in this example.
server: MyDataServer2
local: LocalInterface1
path: NfsPath1
local: LocalInterface2
path: NfsPath2
local: LocalInterface3
path: NfsPath3
local: LocalInterface4
path: NfsPath4
export: /vol/oradata2 mount: C:\APP\ORACLE\ORADATA\ORCL2
export: /vol/oradata3 mount: C:\APP\ORACLE\ORADATA\ORCL3
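A configuration in this form can be checked mechanically before it is replicated to the cluster nodes. The parser below is a simplified sketch based only on the entry shapes shown in Example 3-1 and Example 3-2; it is not the parser that Oracle Database itself uses.

```python
# Simplified sketch of parsing oranfstab entries of the shape shown in
# Example 3-1 and Example 3-2 (not the parser Oracle Database uses).
def parse_oranfstab(text):
    servers, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        # In these examples, an export line carries its mount point
        # on the same line, so handle it before the generic split.
        if line.startswith("export:") and " mount:" in line:
            exp, mnt = line.split(" mount:", 1)
            if current is not None:
                current["export"].append(exp.split(":", 1)[1].strip())
                current["mount"].append(mnt.strip())
            continue
        if ":" not in line:
            continue
        key, value = (p.strip() for p in line.split(":", 1))
        if key == "server":
            current = {"server": value, "local": [], "path": [],
                       "export": [], "mount": []}
            servers.append(current)
        elif current is not None and key in ("local", "path"):
            current[key].append(value)
    return servers

ENTRY = r"""
server: MyDataServer1
local: 132.34.35.10
path: 132.34.35.12
local: 132.34.55.10
path: 132.34.55.12
export: /vol/oradata1 mount: C:\APP\ORACLE\ORADATA\ORCL
"""

parsed = parse_oranfstab(ENTRY)
print(parsed[0]["server"], parsed[0]["mount"])
```

A check like this can catch a missing mount point or a mismatched number of local and path entries before the file is copied to the other nodes.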
ORADNFS is a utility that enables database administrators to perform basic file operations over the Direct NFS Client on Microsoft Windows platforms.
ORADNFS is a multi-call binary, which is a single binary that acts like many utilities. You must be a member of the local ORA_DBA
group to use ORADNFS. To execute a command using ORADNFS, you specify the command as an argument on the command line.
The following command prints a list of commands available with ORADNFS:
C:\> oradnfs help
To display the list of files in the NFS directory mounted as C:\ORACLE\ORADATA\ORCL, use the following command:
C:\> oradnfs ls C:\ORACLE\ORADATA\ORCL
Note:
A valid copy of theoranfstab
configuration file must be present in Oracle_home
\database
for ORADNFS to operate.
Use one of the following methods to disable the Direct NFS client:
Remove the oranfstab
file.
Restore the original oraodm11.dll
file by reversing the process you completed in Section 3.8.5, "Enabling the Direct NFS Client".
Remove the specific NFS server or export paths in the oranfstab
file.
If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use ASMCA to upgrade the existing Oracle ASM instance to Oracle ASM 11g release 2 (11.2). You can also use ASMCA to configure failure groups, Oracle ASM volumes and Oracle ACFS.
Note:
You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.
During installation, if you chose to use Oracle ASM and ASMCA detects that there is a prior Oracle ASM version installed in another Oracle ASM home, then after installing the Oracle ASM 11g release 2 (11.2) binaries, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.
If you are upgrading from Oracle ASM 11g release 2 (11.2.0.1) or later, then Oracle ASM is always upgraded with Oracle Grid Infrastructure as part of the rolling upgrade, and ASMCA is started during the upgrade. ASMCA cannot perform a separate upgrade of Oracle ASM from release 11.2.0.1 to 11.2.0.2.
On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of Oracle ASM instances on all nodes is Oracle ASM 11g release 1, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the prior version of the Oracle ASM instances for an Oracle RAC installation are from a release prior to Oracle ASM 11g release 1, then rolling upgrades cannot be performed. Oracle ASM on all nodes will be upgraded to Oracle ASM 11g release 2 (11.2).
Oracle Grid Infrastructure includes Oracle Clusterware, Oracle ASM, Oracle ACFS, Oracle ADVM, and driver resources and software components, which are installed into the Grid home during installation with OUI. After an Oracle Grid Infrastructure installation, you can use Oracle ASM Configuration Assistant (ASMCA) to start the Oracle ASM instance and create Oracle ASM disk groups, Oracle ADVM volumes, and Oracle ACFS file systems (assuming Oracle Clusterware is operational). Alternatively, Oracle ASM disk groups and Oracle ADVM volumes can be created using SQL*Plus, ASMCMD command line tools, or Oracle Enterprise Manager. File systems can be created in Oracle ACFS using operating system command-line tools or Oracle Enterprise Manager.
Note:
Oracle ACFS is supported only on Windows Server 2003 x64 and Windows Server 2003 R2 x64 for Oracle Grid Infrastructure release 11.2.0.1.
Starting with Oracle Grid Infrastructure release 11.2.0.2, Oracle ACFS is also supported on Windows Server 2008 x64 and Windows Server 2008 R2 x64.
The disk group compatibility attributes COMPATIBLE.ASM and COMPATIBLE.ADVM must be set to 11.2 or higher for the disk group to contain an Oracle ADVM volume.
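For example, you could verify and, if necessary, raise these attributes from SQL*Plus while connected to the Oracle ASM instance with SYSASM privileges. The disk group name data is an assumption for this sketch:

```sql
-- Review current compatibility settings for all mounted disk groups
SELECT name, compatibility, database_compatibility FROM V$ASM_DISKGROUP;

-- Raise the attributes on disk group DATA
-- (compatibility can be advanced but not reversed)
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'  = '11.2';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.advm' = '11.2';
```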
If you want to create the Oracle home for your Oracle RAC database in Oracle ACFS, then perform the following steps:
Install Oracle Grid Infrastructure for a cluster (Oracle Clusterware and Oracle ASM).
Go to the bin directory in the Grid home, for example:
C:\> cd app\11.2.0\grid\bin
Start ASMCA as the Local Administrator user, for example:
C:\..\bin> asmca
The Configure ASM: Disk Groups page is displayed.
On the Configure ASM: Disk Groups page, right-click the disk group in which you want to create the Oracle ADVM volume, then select Create ACFS for Database Home.
In the Create ACFS Hosted Database Home window, enter the following information:
Database Home Volume Name: Enter the name of the database home. The name must be unique in your enterprise, for example: racdb_01
Database Home Mountpoint: Enter the directory path or logical drive letter for the mountpoint. For example: M:\acfsdisks\racdb_01
Make a note of this mountpoint for future reference.
Database Home Size (GB): Enter the size of the database home, in gigabytes.
Click OK when you have entered the required information.
If prompted, run any scripts as directed by ASMCA as the Local Administrator user.
During Oracle RAC 11g release 2 (11.2) installation, ensure that you or the database administrator who installs Oracle RAC selects for the Oracle home the mountpoint you provided in the Database Home Mountpoint field (in the preceding example, this was M:\acfsdisks\racdb_01).
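Before handing the mountpoint to the database installer, you can optionally confirm that it is an Oracle ACFS file system with the acfsutil utility. The mountpoint below repeats the example value; adjust it for your environment:

```
C:\> acfsutil info fs M:\acfsdisks\racdb_01
```

The output reports the file system size, free space, and the underlying Oracle ADVM volume device.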
Note:
You cannot place the Oracle home directory for Oracle Database 11g release 1 or earlier releases on Oracle ACFS.

See Also:
Oracle Automatic Storage Management Administrator's Guide for more information about configuring and managing your storage with Oracle ACFS

Starting with Oracle Database 11g release 2 (11.2) and Oracle RAC 11g release 2 (11.2), using DBCA or OUI to store Oracle Clusterware or Oracle Database files on raw devices is not supported.
If you upgrade an existing Oracle RAC database, or an Oracle RAC database with Oracle ASM instances, then you can continue to use existing raw device partitions and perform a rolling upgrade of your existing installation. Performing a new installation using raw devices is not allowed.