Oracle® Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Microsoft Windows x64 (64-Bit), Part Number E24169-04
This appendix explains the reasons for preinstallation tasks that you are asked to perform, and other installation concepts.
This section reviews concepts related to preinstallation tasks for Oracle Grid Infrastructure for a cluster.
The Oracle Inventory directory is the central inventory location for all Oracle software installed on a server. The location of the Oracle Inventory directory is <System_drive>:\Program Files\Oracle\Inventory.
The first time you install Oracle software on a system, the installer checks to see if an Oracle Inventory directory exists. The location of the Oracle Inventory directory is determined by the Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\inst_loc. If an Oracle Inventory directory does not exist, then the installer creates one in the default location of C:\Program Files\Oracle\Inventory.
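If you want to check whether an Oracle Inventory location is already recorded on a server, you can query this registry key from a command prompt. This is a read-only check; as the following note states, changing the value is not supported:

    C:\> reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Oracle" /v inst_loc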
Note:
Changing the value for inst_loc in the Windows registry is not supported.

By default, the Oracle Inventory directory is not installed under the Oracle base directory for the installation owner. This is because all Oracle software installations share a common Oracle Inventory, so there is only one Oracle Inventory for all users, whereas there is a separate Oracle base directory for each user.
During installation, you are prompted to specify an Oracle base location, which is owned by the user performing the installation. You can choose a location with an existing Oracle home, or choose another directory location that does not have the structure for an Oracle base directory. The default location for the Oracle base directory is <SYSTEM_DRIVE>:\app\user_name\.
Using the Oracle base directory path helps to organize Oracle installations and their parameter, diagnostic, and log files, and helps to ensure that installations of multiple databases maintain an Optimal Flexible Architecture (OFA) configuration.
Multiple Oracle Database installations can use the same Oracle base directory. The Oracle Grid Infrastructure installation uses a different directory path, one outside of Oracle base. If you use different operating system users to perform the Oracle software installations, then each user will have a different default Oracle base location.
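As an illustration, assuming an installation user named oracle and C: as the system drive (both placeholders), an OFA-compliant layout might look like the following; the exact paths depend on the locations you select during installation:

    C:\app\oracle                           Oracle base for the user oracle
    C:\app\oracle\product\11.2.0\dbhome_1   an Oracle Database home, under the Oracle base
    C:\app\oracle\diag                      diagnostic and log files (ADR)
    C:\app\11.2.0\grid                      Grid home, outside the Oracle base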
During installation, you are asked to identify the planned use for each network interface that Oracle Universal Installer (OUI) detects on your cluster node. Identify each interface as a public or private interface, or as an interface that you do not want Oracle Clusterware to use. Public and virtual internet protocol (VIP) addresses are configured on public interfaces. Private addresses are configured on private interfaces.
Refer to the following sections for detailed information about each address type:
The public IP address is assigned dynamically using dynamic host configuration protocol (DHCP), or defined statically in a domain name system (DNS) or a hosts file. It uses the public interface (the interface with access available to clients).
Oracle Clusterware uses interfaces marked as private for internode communication. Each cluster node must have an interface that you identify during installation as a private interface. Private interfaces must have addresses configured for the interface itself, but no additional configuration is required. Oracle Clusterware uses the interfaces you identify as private for the cluster interconnect. Any interface that you identify as private must be on a subnet that connects to every node of the cluster. Oracle Clusterware uses all the interfaces you identify for use as private interfaces.
For the private interconnects, because of Cache Fusion and other traffic between nodes, Oracle strongly recommends using a physically separate, private network. If you configure addresses using a DNS, then you should ensure that the private IP addresses are reachable only by the cluster nodes.
After installation, if you modify the interconnect for Oracle Real Application Clusters (Oracle RAC) with the CLUSTER_INTERCONNECTS initialization parameter, then you must change the interconnect to a private IP address, on a subnet that is not used with a public IP address and not marked as a public subnet by oifcfg. Oracle does not support changing the interconnect to an interface using a subnet that you have designated as a public subnet.
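To review how your subnets are currently classified, you can run the oifcfg utility from Grid_home\bin on a cluster node. This is a sketch, not output from a specific system; the interface names and subnets are placeholders for your own values:

    C:\> oifcfg getif
    PublicNIC   192.0.2.0     global  public
    PrivateNIC  192.168.10.0  global  cluster_interconnect

    REM Classify an additional interface as a private interconnect (example values):
    C:\> oifcfg setif -global PrivateNIC2/192.168.20.0:cluster_interconnect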
You should not use a firewall on the network with the private network IP addresses, because this can block interconnect traffic.
The virtual IP (VIP) address is registered in the grid naming service (GNS), or the DNS. Select an address for your VIP that meets the following requirements:
The IP address and host name are currently unused (the name can be registered in a DNS, but should not be accessible by a ping command); a quick check is sketched after this list
The VIP is on the same subnet as your public interface
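As a minimal pre-installation check from a Windows command prompt, assuming a planned VIP name of node1-vip.example.com (a placeholder for your own name), confirm that the name resolves but does not answer a ping:

    C:\> nslookup node1-vip.example.com
    C:\> ping -n 1 node1-vip.example.com

The nslookup command should return the address you reserved, and the ping should fail to receive a reply; if the ping succeeds, the address is already in use.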
The GNS VIP address is a static IP address configured in the DNS. The DNS delegates queries to the GNS VIP address, and the GNS daemon responds to incoming name resolution requests at that address.
Within the subdomain, the GNS uses multicast Domain Name Service (mDNS), included with Oracle Clusterware, to enable the cluster to map host names and IP addresses dynamically as nodes are added and removed from the cluster, without requiring additional host configuration in the DNS.
To enable GNS, you must have your network administrator provide a set of IP addresses for a subdomain assigned to the cluster (for example, grid.example.com), and delegate DNS requests for that subdomain to the GNS VIP address for the cluster, which GNS will serve. The set of IP addresses is provided to the cluster through DHCP, which must be available on the public network for the cluster.
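After the network administrator has configured the delegation, one way to confirm from a Windows command prompt that queries for the cluster subdomain are delegated (using the example subdomain above) is:

    C:\> nslookup -type=NS grid.example.com

The response should list the name server that the delegation points to for the subdomain, that is, the host name associated with the GNS VIP address.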
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about GNS

Oracle Database 11g release 2 clients connect to the database using a single client access name (SCAN). The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.
The SCAN is a VIP name, similar to the names used for VIP addresses, such as node1-vip. However, unlike a VIP, the SCAN is associated with the entire cluster, rather than with an individual node, and is associated with multiple IP addresses, not just one address.
The SCAN resolves to multiple IP addresses reflecting multiple listeners in the cluster that handle public client connections. When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is made available to the client. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node where the service is currently being offered. Finally, the client establishes a connection to the service through the listener on the node where the service is offered. All of these actions take place transparently to the client, without any explicit configuration required on the client.
During installation, SCAN listeners are created; they listen on the SCAN IP addresses. The SCAN listeners are started on nodes determined by Oracle Clusterware. Oracle Net Services routes application requests to the least-loaded instance providing the service. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.
The SCAN should be configured so that it is resolvable either by using GNS within the cluster, or by using DNS. For high availability and scalability, Oracle recommends that you configure the SCAN name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve to at least one address.
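After installation, you can review the SCAN and SCAN listener configuration from a command prompt on a cluster node, for example (the SCAN name shown is the one used in the example that follows):

    C:\> srvctl config scan
    C:\> srvctl config scan_listener
    C:\> nslookup mycluster-scan.grid.example.com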
If you specify a GNS domain, then the SCAN name defaults to clustername-scan.GNS_domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you start Oracle Grid Infrastructure installation from the server node1, the cluster name is mycluster, and the GNS domain is grid.example.com, then the SCAN name is mycluster-scan.grid.example.com.
Clients configured to use IP addresses for Oracle Database releases prior to Oracle Database 11g release 2 can continue to use their existing connection addresses; using SCANs is not required. When you upgrade to Oracle Clusterware 11g release 2 (11.2), the SCAN becomes available, and you should use the SCAN for connections to Oracle Database 11g release 2 or later databases. When an earlier version of Oracle Database is upgraded, it registers with the SCAN listeners, and clients can start using the SCAN to connect to that database. The database registers with the SCAN listener through the REMOTE_LISTENER parameter in the init.ora file. The REMOTE_LISTENER parameter must be set to SCAN:PORT. Do not set it to a TNSNAMES alias that uses a single address for the SCAN, for example, HOST=SCAN_name.
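For example, assuming the SCAN name from the previous example and the default listener port of 1521 (both placeholders for your own values), the parameter could be set from SQL*Plus as follows:

    SQL> ALTER SYSTEM SET REMOTE_LISTENER='mycluster-scan.grid.example.com:1521' SCOPE=BOTH SID='*';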
The SCAN is optional for most deployments. However, clients connecting to Oracle Database 11g release 2 and later policy-managed databases that use server pools must access the database using the SCAN. This is required because policy-managed databases can run on different servers at different times, so connecting to a particular node by using the virtual IP address for a policy-managed database is not possible.
Oracle Clusterware 11g release 2 (11.2) is automatically configured with Cluster Time Synchronization Service (CTSS). This service provides automatic synchronization of the time settings on all cluster nodes using the optimal synchronization strategy for the type of cluster you deploy. If you have an existing cluster synchronization service, such as network time protocol (NTP) or Windows Time Service, then it will start in an observer mode. Otherwise, it will start in an active mode to ensure that time is synchronized between cluster nodes. CTSS will not cause compatibility issues.
The CTSS module is installed as a part of Oracle Grid Infrastructure installation. CTSS daemons are started by the Oracle High Availability Services daemon (ohasd), and do not require a command-line interface.
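To verify which mode CTSS is running in, and to check clock synchronization across the cluster nodes, you can run the following from Grid_home\bin on any node (a sketch of optional checks, not required steps):

    C:\> crsctl check ctss
    C:\> cluvfy comp clocksync -n all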
The following sections describe concepts related to Oracle Automatic Storage Management (Oracle ASM) storage:
Understanding Oracle Automatic Storage Management Cluster File System
About Converting Standalone Oracle ASM Installations to Clustered Installations
Oracle ASM has been extended to include a general purpose file system, called Oracle Automatic Storage Management Cluster File System (Oracle ACFS). Oracle ACFS is a new multi-platform, scalable file system, and storage management technology that extends Oracle ASM functionality to support customer files maintained outside of the Oracle Database. Files supported by Oracle ACFS include application executable files and application reports. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.
Note:
Oracle ACFS is only supported on Windows Server 2003 x64 and Windows Server 2003 R2 x64 in Oracle ASM 11g release 2 (11.2.0.1).
Starting with Oracle ASM 11g release 2 (11.2.0.2), Oracle ACFS is also supported on Windows Server 2008 x64 and Windows Server 2008 R2 x64.
If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA, located in the path Grid_home\bin) to upgrade the existing Oracle ASM instance to Oracle ASM 11g release 2 (11.2), and subsequently configure failure groups, Oracle ASM volumes, and Oracle ACFS.
Note:
You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.

During installation, if you chose to use Oracle ASM and ASMCA detects that there is a prior Oracle ASM version installed in another Oracle ASM home, then after installing the Oracle ASM 11g release 2 (11.2) software, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.
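As an illustration of that last step, once the upgraded Oracle ASM instance is running, you could create an Oracle ASM volume with ASMCMD and then use ASMCA (or the Oracle ACFS command-line tools) to format and mount it as an Oracle ACFS file system. The disk group and volume names below are placeholders:

    C:\> asmcmd volcreate -G DATA -s 10G ACFSVOL1
    C:\> asmcmd volinfo -G DATA ACFSVOL1

The volinfo command reports the volume device name that is used when formatting the volume as Oracle ACFS.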
On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of Oracle ASM instances on all nodes is Oracle ASM 11g release 1, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the prior version of Oracle ASM instances on an Oracle RAC installation are from an Oracle ASM release prior to Oracle ASM 11g release 1, then rolling upgrades cannot be performed. Oracle ASM is then upgraded on all nodes to 11g release 2 (11.2).
If you have an existing standalone Oracle ASM installation on one or more nodes that are member nodes of the cluster, then OUI proceeds to install Oracle Grid Infrastructure for a cluster. If you place Oracle Clusterware files (OCR and voting disks) on Oracle ASM, then ASMCA is started at the end of the clusterware installation, and provides prompts for you to migrate and upgrade the Oracle ASM instance on the local node, so that you have an Oracle ASM 11g release 2 (11.2) installation.
On remote nodes, ASMCA identifies any standalone Oracle ASM instances that are running, and prompts you to shut down those Oracle ASM instances, and any database instances that use them. ASMCA then extends clustered Oracle ASM instances to all nodes in the cluster. However, disk group names on the cluster-enabled Oracle ASM instances must be different from existing standalone disk group names.
The following topics provide a short overview of server pools:
See Also:
Oracle Clusterware Administration and Deployment Guide for information about how to configure and administer server pools

With Oracle Clusterware 11g release 2 (11.2) and later, resources managed by Oracle Clusterware are contained in logical groups of servers called server pools. Resources are hosted on a shared infrastructure and are contained within server pools. Policies restrict the hardware resources (such as CPU and memory) that the resources can consume, so that they behave as if they were deployed in a single-system environment.
You can choose to manage resources dynamically using server pools to provide policy-based management of resources in the cluster, or you can choose to manage resources using the traditional method of physically assigning resources to run on particular nodes.
The Oracle Grid Infrastructure installation owner has privileges to create and configure server pools, using the Server Control utility (SRVCTL), Oracle Enterprise Manager Database Control, or Oracle Database Configuration Assistant (DBCA).
Policy-based management provides the following functionality:
Enables dynamic capacity assignment when needed to provide server capacity in accordance with the priorities you set with policies
Enables allocation of resources by importance, so that applications obtain the required minimum resources, whenever possible, and so that lower priority applications do not take resources from more important applications
Ensures isolation where necessary, so that you can provide dedicated servers in a cluster for applications and databases
Applications and databases running in server pools do not share resources. Because of this, server pools isolate resources where necessary, but enable dynamic capacity assignments as required. Together with role-separated management, this capability addresses the needs of organizations that have a standardized cluster environment, but allow multiple administrator groups to share the common cluster infrastructure.
Server pools divide the cluster into groups of servers hosting the same or similar resources. They distribute a uniform workload (a set of Oracle Clusterware resources) over several servers in the cluster. For example, you can restrict Oracle databases to run only in a particular server pool. When you enable role-separated management, you can explicitly grant permission to operating system users to change attributes of certain server pools.
Caution:
By default, any named user may create a server pool. To restrict the operating system users that have this privilege, Oracle strongly recommends that you add specific users to the CRS Administrators list. See Oracle Clusterware Administration and Deployment Guide for more information about adding users to the CRS Administrators list.

Top-level server pools:
Logically divide the cluster
Are always exclusive, meaning that one server can only reside in one particular server pool at a certain point in time
Each server pool has three attributes that are assigned when the server pool is created:
MIN_SIZE: The minimum number of servers the server pool can contain. If the number of servers in a server pool is below the value of this attribute, then Oracle Clusterware automatically moves servers from elsewhere into the server pool until the number of servers reaches the attribute value or until there are no free servers available from less important pools.
MAX_SIZE: The maximum number of servers the server pool should contain.
IMPORTANCE: A number from 0 to 1000 (0 being least important) that ranks a server pool among all other server pools in a cluster.
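For example, an administrator could create a server pool with these attributes using the Server Control utility from a command prompt. The pool name and values are placeholders; in Oracle Clusterware 11g release 2, the srvctl add srvpool command accepts -g for the pool name, -l for MIN_SIZE, -u for MAX_SIZE, and -i for IMPORTANCE:

    C:\> srvctl add srvpool -g backoffice -l 2 -u 4 -i 500
    C:\> crsctl status serverpool

The crsctl status serverpool command lists the defined server pools and the servers currently assigned to each.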
When Oracle Clusterware is installed, two server pools are created automatically: Generic and Free. All servers in a new installation are assigned to the Free server pool, initially. Servers move from the Free server pool to newly defined server pools automatically. When you upgrade Oracle Clusterware, all nodes are assigned to the Generic server pool, to ensure compatibility with database releases before Oracle Database 11g release 2 (11.2).
The Free server pool contains servers that are not assigned to any other server pools. The attributes of the Free server pool are restricted, as follows:
SERVER_NAMES, MIN_SIZE, and MAX_SIZE cannot be edited by the user
IMPORTANCE and ACL can be edited by the user
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about how servers are assigned to server pools.

The Generic server pool stores Oracle Database 11g release 1 (11.1) databases and earlier releases, and administrator-managed databases that have fixed configurations. Additionally, the Generic server pool contains servers that match either of the following:
Servers that you specified in the HOSTING_MEMBERS attribute of all resources of the application resource type
Servers with names you specified in the SERVER_NAMES attribute of the server pools that list the Generic server pool as a parent server pool
The attributes of the Generic server pool are restricted, as follows:
The configuration attributes of the Generic server pool cannot be modified (all attributes are read-only).
When you specify a server name in the HOSTING_MEMBERS attribute, Oracle Clusterware only allows it if the server is:
Online and exists in the Generic server pool
Online and exists in the Free server pool, in which case Oracle Clusterware moves the server into the Generic server pool
Online and exists in any other server pool and the client is either a CRS Administrator or is allowed to use the server pool's servers, in which case, the server is moved into the Generic server pool
Offline and the client is a CRS Administrator
When you register a child server pool with the Generic server pool, Oracle Clusterware only allows it if the server names pass the same requirements as previously specified for the resources.
Servers are initially considered for assignment into the Generic server pool at cluster startup time or when a server is added to the cluster, and only after that to other server pools.
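To see which servers an administrator-managed database resource is restricted to, you can inspect the HOSTING_MEMBERS attribute in the resource profile. The resource name ora.orcl.db is a placeholder for your own database resource:

    C:\> crsctl status resource ora.orcl.db -p | findstr HOSTING_MEMBERS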
During an out-of-place upgrade of Oracle Grid Infrastructure, the installer installs the newer version of the software in a separate Grid home. Both versions of Oracle Clusterware are on each cluster member node, but only one version is active.
If you have separate Oracle Clusterware homes on each node, then you can perform an out-of-place upgrade on all nodes, or perform an out-of-place rolling upgrade, so that some nodes are running Oracle Clusterware from the earlier version Oracle Clusterware home, and other nodes are running Oracle Clusterware from the new Oracle Clusterware home. A rolling upgrade avoids downtime and ensures continuous availability while the software is upgraded to a new version.
An in-place upgrade of Oracle Clusterware 11g release 2 is not supported.
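During and after a rolling upgrade, you can confirm which Oracle Clusterware version is active for the cluster and which software version is installed on a given node by running the following from Grid_home\bin:

    C:\> crsctl query crs activeversion
    C:\> crsctl query crs softwareversion

The active version remains at the earlier release until the upgrade has completed on all nodes.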
See Also:
Appendix D, "How to Upgrade to Oracle Grid Infrastructure 11g Release 2" for instructions on completing rolling upgrades