This appendix provides instructions to complete configuration tasks manually that Cluster Verification Utility (CVU) and Oracle Universal Installer (OUI) normally complete during installation using Fixup scripts. Use this appendix as a guide if you cannot use Fixup scripts.
This appendix contains the following information:
Passwordless SSH configuration is a mandatory installation requirement. SSH is used during installation to configure cluster member nodes, and after installation by configuration assistants, Oracle Enterprise Manager, OPatch, and other features.
Automatic Passwordless SSH configuration using OUI creates RSA encryption keys on all nodes of the cluster. If you have system restrictions that require you to set up SSH manually, such as using DSA keys, then use this procedure as a guide to set up passwordless SSH.
In the examples that follow, the Oracle software owner listed is the grid user.
If SSH is not available, then OUI attempts to use rsh and rcp instead. However, these services are disabled by default on most Linux systems.
This section contains the following:
To determine if SSH is running, enter the following command:
$ pgrep sshd
If SSH is running, then the response to this command is one or more process ID numbers. In the home directory of the installation software owner (grid, oracle), use the command ls -al to ensure that the .ssh directory is owned and writable only by the user.
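For example, a quick check similar to the following (the listing shown is illustrative; what matters is the drwx------ mode and the owner):

$ ls -ld ~/.ssh
drwx------ 2 grid oinstall 4096 Jan 1 12:00 /home/grid/.ssh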
You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The instructions that follow are for SSH1. If you have an SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution documentation to configure SSH1 compatibility or to configure SSH2 with DSA.
To configure SSH, you must first create RSA or DSA keys on each cluster node, and then copy all the keys generated on all cluster node members into an authorized keys file that is identical on each node. Note that the SSH files must be readable only by root and by the software installation user (oracle, grid), as SSH ignores a private key file if it is accessible by others. In the examples that follow, the DSA key is used.
You must configure SSH separately for each Oracle software installation owner that you intend to use for installation.
To configure SSH, complete the following:
Complete the following steps on each node:
Log in as the software owner (in this example, the grid user).

To ensure that you are logged in as grid, and to verify that the user ID matches the expected user ID you have assigned to the grid user, enter the commands id and id grid. Ensure that the group and user IDs of the Oracle user, and of the terminal window process you are using, are identical. For example:
$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1100(grid,asmadmin,asmdba)
$ id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1100(grid,asmadmin,asmdba)
If necessary, create the .ssh directory in the grid user's home directory, and set permissions on it to ensure that only the grid user has read and write permissions:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
Enter the following command:
$ /usr/bin/ssh-keygen -t dsa
At the prompts, accept the default location for the key file (press Enter).
Note:
SSH with passphrase is not supported for Oracle Clusterware 11g Release 2 and later releases.

This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file.
Never distribute the private key to anyone not authorized to perform Oracle software installations.
Repeat steps 1 through 4 on each node that you intend to make a member of the cluster, using the DSA key.
Complete the following steps:
On the local node, change directories to the .ssh directory in the Oracle Grid Infrastructure owner's home directory (typically, either grid or oracle). Then, add the DSA key to the authorized_keys file using the following commands:
$ cat id_dsa.pub >> authorized_keys
$ ls
In the .ssh directory, you should see the id_dsa.pub key that you have created, and the file authorized_keys.
On the local node, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys file to the software owner's .ssh directory on a remote node. The following example uses SCP, on a node called node2, with the Oracle Grid Infrastructure owner grid, where the grid user's home path is /home/grid:
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
You are prompted to accept a DSA key. Enter yes, and you see that the node you are copying to is added to the known_hosts file.
When prompted, provide the password for the grid user, which should be the same on all nodes in the cluster. The authorized_keys file is copied to the remote node.
Your output should be similar to the following, where xxx represents parts of a valid IP address:
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
The authenticity of host 'node2 (xxx.xxx.173.152)' can't be established.
DSA key fingerprint is 7e:60:60:ae:40:40:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,xxx.xxx.173.152' (DSA) to the list of known hosts
grid@node2's password:
authorized_keys 100% 828 7.5MB/s 00:00
Using SSH, log in to the node where you copied the authorized_keys file. Then change to the .ssh directory, and using the cat command, add the DSA key for the second node to the authorized_keys file, pressing Enter when you are prompted for a password, so that passwordless SSH is set up:
[grid@node1 .ssh]$ ssh node2
[grid@node2 grid]$ cd .ssh
[grid@node2 .ssh]$ cat id_dsa.pub >> authorized_keys
Repeat steps 2 and 3 from each node to each other member node in the cluster.
When you have added keys from each cluster node member to the authorized_keys file on the last node you want to have as a cluster node member, use scp to copy the authorized_keys file with the keys from all nodes back to each cluster node member, overwriting the existing version on the other nodes.
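For example, a minimal sketch of this final distribution step, assuming a hypothetical three-node cluster (node1, node2, node3) where node3 holds the complete file and the grid user's home is /home/grid:

[grid@node3 .ssh]$ scp authorized_keys node1:/home/grid/.ssh/
[grid@node3 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/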
To confirm that you have all nodes in the authorized_keys file, enter the command more authorized_keys, and determine if there is a DSA key for each member node. The file lists the type of key (ssh-dss), followed by the key, and then followed by the user and server. For example:

ssh-dss AAAABBBB . . . = grid@node1
Note:
The grid user's ~/.ssh/authorized_keys file on every node must contain the contents from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.

After you have copied the authorized_keys file that contains all keys to each node in the cluster, complete the following procedure, in the order listed. In this example, the Oracle Grid Infrastructure software owner is named grid:
On the system where you want to run OUI, log in as the grid user.
Use the following command syntax to run SSH from the local node to each node, including from the local node to itself, and from each node to each other node, where hostname1, hostname2, and so on, are the public host names (alias and fully qualified domain name) of nodes in the cluster:
[grid@nodename]$ ssh hostname1 date
[grid@nodename]$ ssh hostname2 date
.
.
.
For example:
[grid@node1 grid]$ ssh node1 date
The authenticity of host 'node1 (xxx.xxx.100.101)' can't be established.
DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,xxx.xxx.100.101' (DSA) to the list of known hosts.
Mon Dec 4 11:08:13 PST 2006
[grid@node1 grid]$ ssh node1.example.com date
The authenticity of host 'node1.example.com (xxx.xxx.100.101)' can't be established.
DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1.example.com,xxx.xxx.100.101' (DSA) to the list of known hosts.
Mon Dec 4 11:08:13 PST 2006
[grid@node1 grid]$ ssh node2 date
Mon Dec 4 11:08:35 PST 2006
.
.
.
At the end of this process, the public host name for each member node should be registered in the known_hosts file for all other cluster nodes.
If you are using a remote client to connect to the local node, and you see a message similar to "Warning: No xauth data; using fake authentication data for X11 forwarding," then this means that your authorized keys file is configured correctly, but your SSH configuration has X11 forwarding enabled. To correct this issue, proceed to Section 6.2.4, "Setting Remote Display and X11 Forwarding Configuration."
Repeat step 2 on each cluster node member.
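For example, a minimal sketch that automates this check, assuming the cluster members are node1 and node2 (substitute your own aliases and fully qualified domain names); run it as the grid user on each node:

$ for host in node1 node1.example.com node2 node2.example.com; do ssh $host date; done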
If you have configured SSH correctly, then you can now use the ssh or scp commands without being prompted for a password. For example:
[grid@node1 ~]$ ssh node2 date
Mon Feb 26 23:34:42 UTC 2009
[grid@node1 ~]$ ssh node1 date
Mon Feb 26 23:34:48 UTC 2009
If any node prompts for a password, then verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys, and that you have created an Oracle software owner with identical group membership and IDs.
This section contains the following:
Note:
The kernel parameter and shell limit values shown in the following section are recommended values only. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system. Refer to your operating system documentation for more information about tuning kernel parameters.

During installation, or when you run the Cluster Verification Utility (cluvfy) with the flag -fixup, a fixup script is generated. This script updates required kernel parameters to minimum values, if necessary.
If you cannot use the fixup scripts, then review Table F-1 to set values manually:
Table F-1 Minimum Operating System Parameter Settings for Installation on Linux
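After you determine the required values from Table F-1, a generic sketch of checking and applying a kernel parameter manually as root (assuming you add the Table F-1 minimums to /etc/sysctl.conf; /sbin/sysctl -p rereads that file without a restart):

# /sbin/sysctl -a | grep sem
# vi /etc/sysctl.conf
# /sbin/sysctl -p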
Note:
If you intend to install Oracle Databases or Oracle RAC databases on the cluster, be aware that the size of the /dev/shm mount area on each server must be greater than the system global area (SGA) and the program global area (PGA) of the databases on the servers. Review expected SGA and PGA sizes with database administrators to ensure that you do not have to increase /dev/shm after databases are installed on the cluster.

On SUSE Linux Enterprise Server systems only, complete the following steps as needed:
Enter the following command to cause the system to read the /etc/sysctl.conf file when it restarts:
# /sbin/chkconfig boot.sysctl on
Enter the GID of the oinstall group as the value for the parameter /proc/sys/vm/hugetlb_shm_group. Doing this grants members of the oinstall group permission to create shared memory segments.
For example, where the oinstall group GID is 1000:
# echo 1000 > /proc/sys/vm/hugetlb_shm_group
After running this command, use vi to add the following text to /etc/sysctl.conf, and enable the boot.sysctl script to run on system restart:
vm.hugetlb_shm_group=1000
Note:
Only one group can be defined as the vm.hugetlb_shm_group.

Repeat steps 1 through 3 on all other nodes in the cluster.
If you do not use a Fixup script or CVU to set ephemeral ports, then set TCP/IP ephemeral port range parameters manually to provide enough ephemeral ports for the anticipated server workload. Ensure that the lower range is set to at least 9000, to avoid well-known ports and to avoid ports in the registered ports range commonly used by Oracle and other server ports. Set the port range high enough to avoid reserved ports for any applications you intend to use. If the lower value of the range is at least 9000, and the range is large enough for your anticipated workload, then you can ignore OUI warnings regarding the ephemeral port range.
For example, with IPv4, use the following command to check your current range for ephemeral ports:
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 61000
In the preceding example, the lowest port (32768) and the highest port (61000) are set to the default range.
If necessary, update the UDP and TCP ephemeral port range to a range high enough for anticipated system workloads, and ensure that the ephemeral port range starts at 9000 or above. For example:
# echo 9000 65500 > /proc/sys/net/ipv4/ip_local_port_range
Oracle recommends that you make these settings permanent. For example, as root, use a text editor to open /etc/sysctl.conf, and add or change to the following: net.ipv4.ip_local_port_range = 9000 65500, and then restart the network (# /etc/rc.d/init.d/network restart). Refer to your Linux distribution system administration documentation for detailed information about how to automate this ephemeral port range alteration on system restarts.
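For example, a minimal sketch, run as root, that appends the setting and reloads kernel parameters without waiting for a restart (assuming /etc/sysctl.conf does not already contain a conflicting net.ipv4.ip_local_port_range entry):

# echo "net.ipv4.ip_local_port_range = 9000 65500" >> /etc/sysctl.conf
# /sbin/sysctl -p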
For persistent device naming, you can configure ASMLIB or set udev rules.
This section consists of the following:
Oracle recommends that you use Oracle ASM Filter Driver (ASMFD) to maintain device persistence. However, you can choose to use ASMLIB for device persistence.
Review the following section to configure Oracle ASMLIB:
Note:
Oracle ASMLIB is not supported on IBM: Linux on System z.

The Oracle Automatic Storage Management (Oracle ASM) library driver (ASMLIB) simplifies the configuration and management of block disk devices by eliminating the need to rebind block disk devices used with Oracle ASM each time the system is restarted.
With ASMLIB, you define the range of disks you want to have made available as Oracle ASM disks. ASMLIB maintains permissions and disk labels that are persistent on the storage device, so that the label is available even after an operating system upgrade. You can update storage paths on all cluster member nodes by running one oracleasm command on each node, without the need to modify the udev file manually to provide permissions and path persistence.
Note:
If you configure disks using ASMLIB, then you must change the disk discovery string to ORCL:*. If the disk string is set to ORCL:*, or is left empty (""), then the installer discovers these disks.

To use the Oracle Automatic Storage Management library driver (ASMLIB) to configure Oracle ASM devices, complete the following tasks.
Installing and Configuring the Oracle ASM Library Driver Software
Configuring Disk Devices to Use Oracle ASM Library Driver on x86 Systems
Note:
To create a database during the installation using the Oracle ASM library driver, you must choose an installation method that runs ASMCA in interactive mode. You must also change the default disk discovery string to ORCL:*.
ASMLIB is already included with Oracle Linux packages, and with SUSE Linux Enterprise Server. If you are a member of the Unbreakable Linux Network, then you can install the ASMLIB RPMs by subscribing to the Oracle Linux channel, and using yum to retrieve the most current package for your system and kernel. For additional information, see the following URL:
http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html
To install and configure the ASMLIB driver software manually, follow these steps:
Enter the following command to determine the kernel version and architecture of the system:
# uname -rm
Download the required ASMLIB packages from the Oracle Technology Network website:
http://www.oracle.com/technetwork/server-storage/linux/downloads/index-088143.html
Note:
You must install the oracleasm-support package version 2.0.1 or later to use ASMLIB on Red Hat Enterprise Linux 5 Advanced Server. ASMLIB is already included with SUSE Linux Enterprise Server distributions.

See Also:
My Oracle Support Note 1089399.1 for information about ASMLIB support with Red Hat distributions:
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1089399.1
Switch user to the root user:
$ su -
Install the following packages in sequence, where version is the version of the ASMLIB driver, arch is the system architecture, and kernel is the version of the kernel that you are using:
oracleasm-support-version.arch.rpm
oracleasm-kernel-version.arch.rpm
oracleasmlib-version.arch.rpm
Enter a command similar to the following to install the packages:
# rpm -ivh oracleasm-support-version.arch.rpm \
oracleasm-kernel-version.arch.rpm \
oracleasmlib-version.arch.rpm
For example, if you are using the Red Hat Enterprise Linux 5 AS kernel on an AMD64 system, then enter a command similar to the following:
# rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
oracleasm-2.6.18-194.26.1.el5xen-2.0.5-1.el5.x86_64.rpm \
oracleasmlib-2.0.4-1.el5.x86_64.rpm
Enter the following command to run the oracleasm initialization script with the configure option:
# /usr/sbin/oracleasm configure -i
Note:
The oracleasm command in /usr/sbin is the command you should use. The /etc/init.d path is not deprecated, but the oracleasm binary in that path is now used typically for internal commands.

Enter the following information in response to the prompts that the script displays:
Table F-2 ORACLEASM Configure Prompts and Responses
| Prompt | Suggested Response |
|---|---|
| Default user to own the driver interface: | Standard groups and users configuration: Specify the Oracle software owner user (for example, oracle). Job role separation groups and users configuration: Specify the Oracle Grid Infrastructure software owner user (for example, grid). |
| Default group to own the driver interface: | Standard groups and users configuration: Specify the OSDBA group for the database (for example, dba). Job role separation groups and users configuration: Specify the OSASM group for storage administration (for example, asmadmin). |
| Start Oracle ASM Library driver on boot (y/n): | Enter y to start the Oracle ASM library driver when the system starts. |
| Scan for Oracle ASM disks on boot (y/n): | Enter y to scan for Oracle ASM disks when the system starts. |
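For example, a sketch of a completed interactive session for a job role separation configuration, with owner grid and OSASM group asmadmin (the exact prompt and default text can vary by ASMLIB version):

# /usr/sbin/oracleasm configure -i
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y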
The script completes the following tasks:
Creates the /etc/sysconfig/oracleasm configuration file

Creates the /dev/oracleasm mount point
Mounts the ASMLIB driver file system
Note:
The ASMLIB driver file system is not a regular file system. It is used only by the Oracle ASM library to communicate with the Oracle ASM driver.

Enter the following command to load the oracleasm kernel module:
# /usr/sbin/oracleasm init
Repeat this procedure on all nodes in the cluster where you want to install Oracle RAC.
To configure the disk devices to use in an Oracle ASM disk group, follow these steps:
If you intend to use IDE, SCSI, or RAID devices in the Oracle ASM disk group, then follow these steps:
If necessary, install or configure the shared disk devices that you intend to use for the disk group and restart the system.
To identify the device name for the disks to use, enter the following command:
# /sbin/fdisk -l
Depending on the type of disk, the device name can vary. Table F-3 describes some types of disk paths:
Table F-3 Types of Linux Storage Disk Paths
| Disk Type | Device Name Format | Description |
|---|---|---|
| IDE disk | /dev/hdxn | In this example, x is a letter that identifies the IDE disk, and n is the partition number. For example, /dev/hda1 is the first partition on the first IDE disk. |
| SCSI disk | /dev/sdxn | In this example, x is a letter that identifies the SCSI disk, and n is the partition number. For example, /dev/sda1 is the first partition on the first SCSI disk. |
| RAID disk | /dev/rd/cxdypz, /dev/ida/cxdypz | Depending on the RAID controller, RAID devices can have different device names. In the examples shown, x is a number that identifies the controller, y is a number that identifies the disk, and z is a number that identifies the partition. |
To include devices in a disk group, you can specify either whole-drive device names or partition device names.
Note:
Oracle recommends that you create a single whole-disk partition on each disk.

Use either fdisk or parted to create a single whole-disk partition on the disk devices.
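For example, a minimal sketch using parted on a hypothetical device /dev/sdb (confirm the device name with fdisk -l first; partitioning destroys any existing data on the disk):

# /sbin/parted -s /dev/sdb mklabel msdos
# /sbin/parted -s /dev/sdb mkpart primary 1MiB 100%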
Enter a command similar to the following to mark a disk as an Oracle ASM disk:
# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
In this example, DISK1 is the name you assign to the disk.

Note:
The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

If you are using a multi-pathing disk driver with Oracle ASM, then make sure that you specify the correct logical device name for the disk.
To make the disk available on the other nodes in the cluster, enter the following command as root on each node:
# /usr/sbin/oracleasm scandisks
This command identifies shared disks attached to the node that are marked as Oracle ASM disks.
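To verify the result, you can list the marked disks on each node with the listdisks option described in Table F-4; the label you created should appear. For example:

# /usr/sbin/oracleasm listdisks
DISK1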
If you formatted the DASD with the compatible disk layout, then enter a command similar to the following to create a single whole-disk partition on the device:
# /sbin/fdasd -a /dev/dasdxxxx
Enter a command similar to the following to mark a disk as an ASM disk:
# /etc/init.d/oracleasm createdisk DISK1 /dev/dasdxxxx
In this example, DISK1 is the name that you want to assign to the disk.

Note:
The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

If you are using a multi-pathing disk driver with ASM, then make sure that you specify the correct logical device name for the disk.
To make the disk available on the other cluster nodes, enter the following command as root on each node:
# /etc/init.d/oracleasm scandisks
This command identifies shared disks attached to the node that are marked as ASM disks.
To administer the Oracle Automatic Storage Management library driver (ASMLIB) and disks, use the /usr/sbin/oracleasm initialization script with different options, as described in Table F-4:
Table F-4 Disk Management Tasks Using ORACLEASM
| Task | Command Example | Description |
|---|---|---|
| Configure or reconfigure ASMLIB | oracleasm configure -i | Use the configure option to reconfigure the Oracle ASM library driver, if necessary. To see command options, enter oracleasm configure without the -i flag. |
| Change system restart load options for ASMLIB | oracleasm enable | Options are disable and enable. Use the enable and disable options to change the actions of the Oracle ASM library driver when the system starts. |
| Load or unload ASMLIB without restarting the system | oracleasm restart | Options are start, stop, and restart. Use these options to load or unload the Oracle ASM library driver without restarting the system. |
| Mark a disk for use with ASMLIB | oracleasm createdisk VOL1 /dev/sda1 | Use the createdisk option to mark a disk device for use with the Oracle ASM library driver and give it a name: oracleasm createdisk labelname devicepath |
| Unmark a named disk device | oracleasm deletedisk VOL1 | Use the deletedisk option to unmark a named disk device: oracleasm deletedisk diskname. Caution: Do not use this command to unmark disks that are being used by an Oracle Automatic Storage Management disk group. You must delete the disk from the Oracle Automatic Storage Management disk group before you unmark it. |
| Determine if ASMLIB is using a disk device | oracleasm querydisk | Use the querydisk option to determine if a disk device or disk name is being used by the Oracle ASM library driver: oracleasm querydisk diskname_devicename |
| List Oracle ASMLIB disks | oracleasm listdisks | Use the listdisks option to list the disk names of marked ASMLIB disks. |
| Identify disks marked as ASMLIB disks | oracleasm scandisks | Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as ASMLIB disks on another node. |
| Rename ASMLIB disks | oracleasm renamedisk VOL1 VOL2 | Use the renamedisk option to change the label of an Oracle ASM library driver disk or device. Caution: You must ensure that all Oracle Database and Oracle ASM instances have ceased using the disk before you relabel the disk. If you do not do this, then you may lose data. |
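For example, a sketch of checking the disk labeled DISK1 created earlier with the querydisk option (the output wording can vary by ASMLIB version):

# /usr/sbin/oracleasm querydisk DISK1
Disk "DISK1" is a valid ASM disk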
Additional configuration is required to use the Oracle Automatic Storage Management library driver (ASMLIB) with third-party vendor multipath disks.
See Also:
My Oracle Support site for updates to supported storage options:
https://support.oracle.com/
Oracle ASM requires that each disk is uniquely identified. If the same disk appears under multiple paths, then it causes errors. In a multipath disk configuration, the same disk can appear three times:
The initial path to the disk
The second path to the disk
The multipath disk access point
For example, if you have one local disk, /dev/sda, and one disk attached with external storage, then your server shows two connections, or paths, to that external storage. The Linux SCSI driver shows both paths. They appear as /dev/sdb and /dev/sdc. The system may access either /dev/sdb or /dev/sdc, but the access is to the same disk.
If you enable multipathing, then you have a multipath disk (for example, /dev/multipatha), which can access both /dev/sdb and /dev/sdc; any I/O to multipatha can use either the sdb or sdc path. If a system is using the /dev/sdb path, and that cable is unplugged, then the system shows an error, but the multipath disk switches from the /dev/sdb path to the /dev/sdc path.
Most system software is unaware of multipath configurations, and can use any of the paths (sdb, sdc, or multipatha). ASMLIB also is unaware of multipath configurations.
By default, ASMLIB recognizes the first disk path that Linux reports to it, but because it imprints an identity on that disk, it recognizes that disk only under one path. Depending on your storage driver, it may recognize the multipath disk, or it may recognize one of the single disk paths.
Instead of relying on the default, you should configure Oracle ASM to recognize the multipath disk.
The ASMLIB configuration file is located in the path /etc/sysconfig/oracleasm. It contains all the startup configuration you specified with the command /etc/init.d/oracleasm configure. That command cannot configure scan ordering.
The configuration file contains many configuration variables. The ORACLEASM_SCANORDER variable specifies disks to be scanned first. The ORACLEASM_SCANEXCLUDE variable specifies the disks that are to be ignored.
Configure values for ORACLEASM_SCANORDER using space-delimited prefix strings. A prefix string is the common string associated with a type of disk. For example, if you use the prefix string sd, then this string matches all SCSI devices, including /dev/sda, /dev/sdb, /dev/sdc, and so on. Note that these are not globs; they do not use wildcards. They are simple prefixes. Also note that the path is not a part of the prefix. For example, the /dev/ path is not part of the prefix for SCSI disks that are in the path /dev/sd*.
For Oracle Linux and Red Hat Enterprise Linux version 5, when scanning, the kernel sees the devices as /dev/mapper/XXX entries. By default, the device file naming scheme udev creates the /dev/mapper/XXX names for human readability. Any configuration using ORACLEASM_SCANORDER should use the /dev/mapper/XXX entries.
To configure ASMLIB to select multipath disks first, complete the following procedure:
Using a text editor, open the ASMLIB configuration file /etc/sysconfig/oracleasm.
Edit the ORACLEASM_SCANORDER variable to provide the prefix path of the multipath disks. For example, if the multipath disks use the prefix multipath (/dev/mapper/multipatha, /dev/mapper/multipathb, and so on), and the multipath disks mount SCSI disks, then provide a prefix path similar to the following:
ORACLEASM_SCANORDER="multipath sd"
Save the file.
When you have completed this procedure, then when ASMLIB scans disks, it first scans all disks with the prefix string multipath, and labels these disks as Oracle ASM disks using the /dev/mapper/multipathX value. It then scans all disks with the prefix string sd. However, because ASMLIB recognizes that these disks have already been labeled with the /dev/mapper/multipath string values, it ignores these disks. After scanning for the prefix strings multipath and sd, Oracle ASM then scans for any other disks that do not match the scan order.
In the example in step 2, the keyword multipath is actually the alias for multipath devices configured in /etc/multipath.conf under the multipaths section. For example:
multipaths {
  multipath {
    wwid 3600508b4000156d700012000000b0000
    alias multipath
    ...
  }
  multipath {
    ...
    alias mympath
    ...
  }
  ...
}
The default device name is in the format /dev/mapper/mpath* (or a similar path).
To configure ASMLIB to exclude particular single path disks, complete the following procedure:
Using a text editor, open the ASMLIB configuration file /etc/sysconfig/oracleasm.
Edit the ORACLEASM_SCANEXCLUDE variable to provide the prefix path of the single path disks. For example, if you want to exclude the single path disks /dev/sdb and /dev/sdc, then provide a prefix path similar to the following:
ORACLEASM_SCANEXCLUDE="sdb sdc"
Save the file.
When you have completed this procedure, then when ASMLIB scans disks, it scans all disks except for the disks with the sdb and sdc prefixes, so that it ignores /dev/sdb and /dev/sdc. It does not ignore other SCSI disks, nor multipath disks. If you have a multipath disk (for example, /dev/multipatha), which accesses both /dev/sdb and /dev/sdc, but you have configured ASMLIB to ignore sdb and sdc, then ASMLIB ignores these disks and instead marks only the multipath disk as an Oracle ASM disk.
If you have Oracle ASMLIB installed but do not use it for storage persistence, you can deinstall it in rolling mode, one node at a time, as follows:

Log in as root.
Stop Oracle ASM and any running database instance on the node:
srvctl stop asm -node node_name
srvctl stop instance -d db_unique_name -node node_name
To stop the last Oracle Flex ASM instance on the node, stop the Oracle Clusterware stack:
Grid_home/bin/crsctl stop crs
Stop Oracle ASMLIB:
/etc/init.d/oracleasm disable
Remove the oracleasm library and tools RPMs:

rpm -e oracleasm-support
rpm -e oracleasmlib
Remove any oracleasm kernel driver RPMs provided by vendors:
rpm -e oracleasm
Check if any oracleasm RPMs remain:

rpm -qa | grep oracleasm
If any oracleasm RPMs remain, remove them:

rpm -qa | grep oracleasm | xargs rpm -e
Oracle ASMLIB and the associated RPMs are removed.
Start the Oracle Clusterware stack. Optionally, you can install and configure Oracle ASM Filter Driver (Oracle ASMFD) before starting the Oracle Clusterware stack.
See Also:
Oracle Automatic Storage Management Administrator's Guide for more information about configuring storage device path persistence using Oracle ASM Filter Driver

This section contains the following information about preparing disk devices for use by Oracle ASM:
Note:
The operation of udev depends on the Linux version, vendor, and storage configuration.

By default, the device file naming scheme udev dynamically creates device file names when the server is started, and assigns ownership of them to root. If udev applies default settings, then it changes device file names and owners for voting files or Oracle Cluster Registry partitions, making them inaccessible when the server is restarted. For example, a voting file on a device named /dev/sdd owned by the user grid may be on a device named /dev/sdf owned by root after restarting the server.
If you use ASMFD, then you do not need to ensure permissions and device path persistency in udev.
If you do not use ASMFD, then you must create a custom rules file. When udev is started, it sequentially carries out rules (configuration directives) defined in rules files. These files are in the path /etc/udev/rules.d/. Rules files are read in lexical order. For example, rules in the file 10-wacom.rules are parsed and carried out before rules in the rules file 90-ib.rules.
When specifying the device information in the UDEV rules file, ensure that the OWNER, GROUP and MODE are specified before all other characteristics in the order shown. For example, if you want to include the characteristic ACTION on the UDEV line, then specify ACTION after OWNER, GROUP, and MODE.
Where rules files describe the same devices, on the supported Linux kernel versions, the last file read is the one that is applied.
To configure a permissions file for disk devices for Oracle ASM, complete the following tasks:
To obtain information about existing block devices, run the command scsi_id (/sbin/scsi_id) on storage devices from one cluster node to obtain their unique device identifiers. When running the scsi_id command with the -s argument, the device path and name passed should be relative to the sysfs directory /sys (for example, /block/device) when referring to /sys/block/device. For example:
# /sbin/scsi_id -g -s /block/sdb/sdb1
360a98000686f6959684a453333524174
# /sbin/scsi_id -g -s /block/sde/sde1
360a98000686f6959684a453333524179
Record the unique SCSI identifiers of clusterware devices, so you can provide them when required.
Note:
The command scsi_id should return the same device identifier value for a given device, regardless of which node the command is run from.

Configure SCSI devices as trusted devices (white listed) by editing the /etc/scsi_id.config file and adding options=-g to the file. For example:
# cat > /etc/scsi_id.config
vendor="ATA",options=-p 0x80
options=-g
Using a text editor, create a UDEV rules file for the Oracle ASM devices, setting permissions to 0660 for the installation owner and the group whose members are administrators of the Oracle Grid Infrastructure software. For example, on Oracle Linux, to create a role-based configuration rules.d file where the installation owner is grid and the OSASM group is asmadmin, enter commands similar to the following:
# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", OWNER="grid", GROUP="asmadmin", MODE="0660", BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="14f70656e66696c00000000"
KERNEL=="sd?2", OWNER="grid", GROUP="asmadmin", MODE="0660", BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="14f70656e66696c00000001"
KERNEL=="sd?3", OWNER="grid", GROUP="asmadmin", MODE="0660", BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="14f70656e66696c00000002"
Copy the rules.d file to all other nodes on the cluster. For example:
# scp 99-oracle-asmdevices.rules root@node2:/etc/udev/rules.d/99-oracle-asmdevices.rules
Load updated block device partition tables on all member nodes of the cluster, using /sbin/partprobe devicename. You must do this as root.
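For example, a minimal sketch, run as root on each node, assuming the hypothetical disk /dev/sdb behind the partitions in the rules file above:

# /sbin/partprobe /dev/sdb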
Run the command udevtest (/sbin/udevtest) to test the UDEV rules configuration you have created. The output should indicate that the devices are available and the rules are applied as expected. For example:
# udevtest /block/sdb/sdb1
main: looking at device '/block/sdb/sdb1' from subsystem 'block'
udev_rules_get_name: add symlink 'disk/by-id/scsi-360a98000686f6959684a453333524174-part1'
udev_rules_get_name: add symlink 'disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.887085-part1'
udev_node_mknod: preserve file '/dev/.tmp-8-17', because it has correct dev_t
run_program: '/lib/udev/vol_id --export /dev/.tmp-8-17'
run_program: '/lib/udev/vol_id' returned with status 4
run_program: '/sbin/scsi_id'
run_program: '/sbin/scsi_id' (stdout) '360a98000686f6959684a453333524174'
run_program: '/sbin/scsi_id' returned with status 0
udev_rules_get_name: rule applied, 'sdb1' becomes 'data1'
udev_device_event: device '/block/sdb/sdb1' validate currently present symlinks
udev_node_add: creating device node '/dev/data1', major = '8', minor = '17', mode = '0640', uid = '0', gid = '500'
udev_node_add: creating symlink '/dev/disk/by-id/scsi-360a98000686f6959684a453333524174-part1' to '../../data1'
udev_node_add: creating symlink '/dev/disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.84187085-part1' to '../../data1'
main: run: 'socket:/org/kernel/udev/monitor'
main: run: '/lib/udev/udev_run_devd'
main: run: 'socket:/org/freedesktop/hal/udev_event'
main: run: '/sbin/pam_console_apply /dev/data1 /dev/disk/by-id/scsi-360a98000686f6959684a453333524174-part1 /dev/disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.84187085-part1'
In the example output, note that applying the rules renames OCR device /dev/sdb1 to /dev/data1.
Enter the command to restart the UDEV service.
On Oracle Linux and Red Hat Enterprise Linux, the commands are:
# /sbin/udevcontrol reload_rules
# /sbin/start_udev
On SUSE Linux Enterprise Server, the command is:
# /etc/init.d/boot.udev restart
To check your OCFS2 version manually, enter the following commands:
modinfo ocfs2
rpm -qa | grep ocfs2
Ensure that ocfs2console and ocfs2-tools are at least version 1.2.7, and that the other OCFS2 components correspond to the pattern ocfs2-kernel_version-1.2.7 or greater. If you want to install Oracle RAC on a shared home, then the OCFS2 version must be 1.4.1 or greater.
For information about OCFS2, refer to the following website:
http://oss.oracle.com/projects/ocfs2/