Oracle® Grid Infrastructure Installation Guide
11g Release 2 (11.2) for IBM AIX on POWER Systems (64-Bit)

Part Number E24614-03

1 Typical Installation for Oracle Grid Infrastructure for a Cluster

This chapter describes the difference between a Typical and Advanced installation for Oracle Grid Infrastructure for a cluster, and describes the steps required to complete a Typical installation.

This chapter contains the following sections:

  • Typical and Advanced Installation

  • Preinstallation Steps Completed Using Typical Installation

  • Preinstallation Steps Requiring Manual Tasks

1.1 Typical and Advanced Installation

There are two installation options for Oracle Grid Infrastructure installations:

  • Typical installation: a simplified installation with a minimal number of manual configuration choices, intended for most cluster implementations.

  • Advanced installation: an installation procedure that offers more configuration choices, such as additional storage and network options, and requires a greater degree of system knowledge.

1.2 Preinstallation Steps Completed Using Typical Installation

With Oracle Clusterware 11g release 2 (11.2), during installation Oracle Universal Installer (OUI) generates Fixup scripts (runfixup.sh) that you can run to complete required preinstallation steps.

You are prompted to run these scripts as root in a separate terminal session. When you run them, they complete the required operating system configuration tasks for you.
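
When OUI reports fixable verification failures, it displays the location of the generated script. Open a separate terminal session, log in as root, and run the script at the path that OUI shows; the path in the following sketch is only illustrative:

# /tmp/CVU_11.2.0_grid/runfixup.sh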

1.3 Preinstallation Steps Requiring Manual Tasks

Complete the following manual configuration tasks:

1.3.1 Verify System Requirements

Enter the following commands to check available memory:

# /usr/sbin/lsattr -E -l sys0 -a realmem
# /usr/sbin/lsps -a

The minimum RAM required for Oracle Grid Infrastructure for a Cluster installations, including installations where you plan to install Oracle RAC, is 2.5 GB.

The minimum required swap space is 1.5 GB. For systems with 2.5 GB to 16 GB of RAM, Oracle recommends that you use swap space equal to RAM. For systems with more than 16 GB of RAM, use 16 GB of swap space. If the swap space and the Grid home are on the same file system, then add together their respective disk space requirements for the total minimum space required.
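
As an illustration only (the field positions are assumed from typical lsattr and lsps output), the following sketch reports installed RAM in MB and summarizes configured paging space so that you can compare both against the requirements above:

# Physical memory: lsattr reports realmem in KB in the second field
realmem_kb=$(/usr/sbin/lsattr -E -l sys0 -a realmem | awk '{print $2}')
echo "RAM (MB): $((realmem_kb / 1024))"

# Summary of total paging space and percent used
/usr/sbin/lsps -s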

Verify the space available for Oracle Clusterware files. For example:

GPFS:

/usr/bin/df -k

To check raw device volumes in preparation for installing Oracle ASM disk groups, use the following checks:

Raw Logical Volumes in Concurrent VG (HACMP): In the following example, the variable lv_name is the name of the raw logical volume whose space you want to verify:

lslv lv_name

Raw hard disks: In the following example, the variable rhdisk# is the raw hard disk number that you want to verify, and the variable size_mb is the size in megabytes of the partition that you want to verify:

lsattr -El rhdisk# -a size_mb
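
For example, to check several candidate disks at once (the disk numbers here are illustrative; substitute your own):

for n in 2 3 4
do
  lsattr -El rhdisk$n -a size_mb
done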

If you use normal redundancy for Oracle Clusterware files, which is three Oracle Cluster Registry (OCR) locations and three voting disks, ideally in different file systems on independent disks, then you should have at least 1 GB of disk space available on separate physical disks reserved for Oracle Clusterware files. Each file system for the Oracle Clusterware files should be at least 280 MB in size.

Note:

You cannot install OCR or voting disk files on raw partitions. You can install only on Oracle ASM, or on supported network-attached storage or cluster file systems. The only use for raw devices is as ASM disks.

To ensure high availability of Oracle Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files.

Ensure that you have at least 13 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home). This includes Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM) files and log files, Oracle ACFS log files, and the Cluster Health Monitor repository.

Ensure that you have at least 1 GB of space in /tmp. To check the space available in /tmp, enter the following command:

/usr/bin/df -k /tmp

If this space is not available, then increase the size of the /tmp file system, or delete unnecessary files in /tmp.

1.3.2 Check Network Requirements

Ensure that you have the following available:

1.3.2.1 Single Client Access Name (SCAN) for the Cluster

During Typical installation, you are prompted to confirm the default Single Client Access Name (SCAN), which is used to connect to databases within the cluster irrespective of which nodes they are running on. By default, the name used as the SCAN is also the name of the cluster. The default value for the SCAN is based on the local node name. If you change the SCAN from the default, then the name that you use must be globally unique throughout your enterprise.

In a Typical installation, the SCAN is also the name of the cluster. The SCAN and cluster name must be between 1 and 15 characters in length, must be alphanumeric, and may contain hyphens (-).

For example:

NE-Sa89

If you require a SCAN that is longer than 15 characters, then be aware that the cluster name defaults to the first 15 characters of the SCAN.
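
To confirm that the SCAN resolves correctly before installation, you can query DNS; the SCAN name in this example is a placeholder:

nslookup mycluster-scan.example.com

DNS should return the three configured SCAN addresses, in varying order across repeated queries.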

1.3.2.2 IP Address Requirements

Before starting the installation, you must have at least two interfaces configured on each node: One for the private IP address and one for the public IP address.

1.3.2.2.1 IP Address Requirements for Manual Configuration

If you do not enable GNS, then the public and virtual IP addresses for each node must be static IP addresses, configured before installation for each node, but not currently in use. Public and virtual IP addresses must be on the same subnet.

Oracle Clusterware manages private IP addresses in the private subnet on interfaces you identify as private during the installation interview.

The cluster must have the following addresses configured:

  • A public IP address for each node, with the following characteristics:

    • Static IP address

    • Configured before installation for each node, and resolvable to that node before installation

    • On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses

  • A virtual IP address for each node, with the following characteristics:

    • Static IP address

    • Configured before installation for each node, but not currently in use

    • On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses

  • A Single Client Access Name (SCAN) for the cluster, with the following characteristics:

    • Three static IP addresses configured on the domain name server (DNS) before installation so that the three IP addresses are associated with the name provided as the SCAN, and all three addresses are returned in random order by the DNS to the requestor

    • Configured before installation in the DNS to resolve to addresses that are not currently in use

    • Given a name that does not begin with a numeral

    • On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses

    • Conforms with the RFC 952 standard, which allows alphanumeric characters and hyphens ("-"), but does not allow underscores ("_").

  • A private IP address for each node, with the following characteristics:

    • Static IP address

    • Configured before installation, but on a separate, private network, with its own subnet, that is not resolvable except by other cluster member nodes
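
Because the virtual IP and SCAN addresses listed above must be configured before installation but not in use, a quick check is to confirm that they do not yet respond to ping; the addresses in this sketch are placeholders:

ping -c 3 192.0.2.101
ping -c 3 192.0.2.102

Each address should show no response (100% packet loss) until Oracle Clusterware brings it online after installation.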

After installation, when a client sends a request to the cluster, the Oracle Clusterware SCAN listeners redirect client requests to servers in the cluster.

Note:

Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. Use DNS resolution for SCAN VIPs. If you use the hosts file to resolve SCANs, then you will only be able to resolve to one IP address and you will have only one SCAN address.

See Also:

Appendix C, "Understanding Network Addresses" for more information about network addresses

1.3.2.3 Redundant Interconnect Usage

In previous releases, using redundant networks for the interconnect required bonding, trunking, teaming, or similar technology. Oracle Grid Infrastructure and Oracle RAC can now use redundant network interconnects, without the use of other network technology, to provide highly available and optimized communication in the cluster. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).

Redundant Interconnect Usage enables load-balancing and high availability across multiple (up to 4) private networks (also known as interconnects).
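
As a hedged illustration of how additional private interfaces are designated, the oifcfg utility in the Grid home can mark an interface as a cluster interconnect after installation; the interface name (en2), subnet, and Grid home path here are assumptions for this example:

/u01/app/11.2.0/grid/bin/oifcfg setif -global en2/10.1.1.0:cluster_interconnect
/u01/app/11.2.0/grid/bin/oifcfg getif

The getif command lists the current public and private interface assignments.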

1.3.2.4 Intended Use of Network Interfaces

During installation, you are asked to identify the planned use for each network interface that OUI detects on your cluster node. You must identify each interface as a public or private interface, or as "do not use." For interfaces that you plan to have used for other purposes—for example, an interface dedicated to a network file system—you must identify those instances as "do not use" interfaces, so that Oracle Clusterware ignores them.

Redundant Interconnect Usage cannot protect interfaces used for public communication. If you require high availability or load balancing for public interfaces, then use a third party solution. Typically, bonding, trunking or similar technologies can be used for this purpose.

You can enable Redundant Interconnect Usage for the private network by selecting multiple interfaces to use as private interfaces. Redundant Interconnect Usage creates a redundant interconnect when you identify more than one interface as private. This functionality is available starting with Oracle Grid Infrastructure 11g Release 2 (11.2.0.2).
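
To see which network interfaces OUI will detect, and to decide in advance which to identify as public, private, or "do not use," you can list the configured interfaces and their addresses. For example:

netstat -in
ifconfig -a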

1.3.3 Check Operating System Packages

Refer to the tables listed in Section 2.8, "Checking the Software Requirements" for the list of required packages for your operating system.
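
For example, you can query installed filesets with lslpp; the fileset names below are common entries from the AIX requirements, but the tables in Section 2.8 remain the authoritative list:

lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat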

1.3.4 Create Groups and Users

Enter the following commands to create the default groups and users. This example uses one system privileges group for all operating system-authenticated administration privileges, including Oracle RAC (if installed):

# mkgroup -'A' id='1000' adms='root' oinstall 
# mkgroup -'A' id='1031' adms='root' dba
# mkuser id='1100' pgrp='oinstall' groups='dba' home='/home/grid' grid
# mkuser id='1101' pgrp='oinstall' groups='dba' home='/home/oracle' oracle
# mkdir -p  /u01/app/11.2.0/grid
# chown -R grid:oinstall /u01
# mkdir /u01/app/oracle
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/

This set of commands creates a single installation owner, with required system privileges groups to grant the OraInventory system privileges (oinstall), and to grant the OSASM/SYSASM and OSDBA/SYSDBA system privileges. It also creates the Oracle base for both Oracle Grid Infrastructure and Oracle RAC, /u01/app/oracle. It creates the Grid home (the location where Oracle Grid Infrastructure binaries are stored), /u01/app/11.2.0/grid.

Ensure that the Oracle Grid Infrastructure installation owner account has the capabilities CAP_NUMA_ATTACH, CAP_BYPASS_RAC_VMM, and CAP_PROPAGATE.

To check existing capabilities, enter the following command as root; in this example, the Grid installation user account is grid:

# /usr/bin/lsuser -a capabilities grid
 

To add capabilities, enter a command similar to the following:

# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid

Set the password on the grid installation owner account:

# passwd grid

Repeat this process for each cluster member node.
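
To verify the accounts and group memberships on each node after you create them, you can run checks similar to the following:

id grid
id oracle
lsgroup oinstall dba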

1.3.5 Configure Oracle Installation Owner Shell Limits

Set shell limits for the Oracle Grid Infrastructure installation owner and for root to unlimited. Verify that unlimited is set for both accounts either by using the smit utility or by editing the /etc/security/limits file. The root user requires these settings because the crs daemon (crsd) runs as root. Add the following lines to the limits file:

default:
        fsize = -1
        core = 2097151
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1
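
After editing /etc/security/limits, you can confirm the values that apply to the installation owner and to root (a value of -1 indicates unlimited). For example:

# /usr/bin/lsuser -a fsize cpu data rss stack nofiles grid root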

1.3.6 Check Storage

You must have space available either on a supported file system or on Oracle Automatic Storage Management for Oracle Clusterware files (voting disks and Oracle Cluster Registries), and for Oracle Database files, if you install a standalone database or an Oracle Real Application Clusters (Oracle RAC) database. Creating Oracle Clusterware files on block or raw devices is no longer supported for new installations.

Note:

When using Oracle Automatic Storage Management (Oracle ASM) for either the Oracle Clusterware files or Oracle Database files, Oracle creates one Oracle ASM instance on each node in the cluster, regardless of the number of databases.

1.3.7 Prepare Storage for Oracle Automatic Storage Management

Review the relevant sections in Chapter 3 for the installation option you want to configure.

1.3.8 Install Oracle Grid Infrastructure Software

  1. Start OUI from the root level of the installation media. For example:

    ./runInstaller
    
  2. Select Install and Configure Grid Infrastructure for a Cluster, then select Typical Installation. In the installation screens that follow, enter the configuration information as prompted.

    If you receive an installation verification error that cannot be fixed using a fixup script, then review Chapter 2, "Advanced Installation Oracle Grid Infrastructure for a Cluster Preinstallation Tasks" to find the section for configuring cluster nodes. After completing the fix, continue with the installation until it is complete.
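
    If you want to recheck the cluster configuration yourself before rerunning the installer, you can run the Cluster Verification Utility from the installation media; the node names in this example are placeholders:

    ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose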