10 Eliminating Performance Bottlenecks

This chapter describes how to identify and reduce performance issues.

Verifying That SQL Runs Efficiently

An important aspect of ensuring that your system performs well is to eliminate performance problems. This section describes some methods of finding and eliminating these bottlenecks.

Analyzing Optimizer Statistics

Optimizer statistics are a collection of data that describes more details about the database and the objects in the database. These statistics are stored in the data dictionary and are used by the query optimizer to choose the best execution plan for each SQL statement. Optimizer statistics include the following:

  • Table statistics (number of rows, blocks, and the average row length)

  • Column statistics (number of distinct values in a column, number of null values in a column, and data distribution)

  • Index statistics (number of leaf blocks, levels, and clustering factor)

  • System statistics (CPU and I/O performance and utilization)

You can view the optimizer statistics using data dictionary views similar to the following:

SELECT * FROM DBA_TAB_STATISTICS;

Because the objects in a database can constantly change, statistics must be regularly updated so that they accurately describe these database objects. Statistics are maintained automatically by Oracle Database, or you can maintain the optimizer statistics manually using the DBMS_STATS package.
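
For example, you can gather statistics for a single table manually with the DBMS_STATS package. The following is a minimal sketch, assuming the SH sample schema; the parameter values are only illustrative:

BEGIN
  -- Gather table and column statistics for SH.SALES;
  -- CASCADE => TRUE also gathers statistics on the table's indexes.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',
    tabname          => 'SALES',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/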

Analyzing an Execution Plan

To execute a SQL statement, Oracle Database may perform many steps. Each of these steps either retrieves rows of data physically from the database or prepares them in some way for the user issuing the statement. The combination of the steps Oracle Database uses to execute a statement is called an execution plan. An execution plan includes an access path for each table that the statement accesses and an ordering of the tables (the join order) with the appropriate join method.

You can examine the execution plan chosen by the optimizer for a SQL statement by using the EXPLAIN PLAN statement. When the statement is issued, the optimizer chooses an execution plan and then inserts data describing the plan into a database table. Issue the EXPLAIN PLAN statement and then query the output table.

General guidelines for using the EXPLAIN PLAN statement are:

  • Use the SQL script UTLXPLAN.SQL to create a sample output table called PLAN_TABLE in your schema.

  • Include the EXPLAIN PLAN FOR clause before the SQL statement.

  • After issuing the EXPLAIN PLAN statement, use one of the scripts or packages provided by Oracle Database to display the most recent plan table output.

  • The execution order in EXPLAIN PLAN output begins with the line that is indented farthest to the right. If two lines are indented equally, then the top line is usually executed first.

Example: Analyzing Explain Plan Output

The following example shows the output of two EXPLAIN PLAN statements, one with dynamic pruning and one with static pruning.

To analyze EXPLAIN PLAN output:

EXPLAIN PLAN FOR
SELECT p.prod_name
, c.channel_desc
, SUM(s.amount_sold) revenue
FROM products p
, channels c
, sales s
WHERE s.prod_id = p.prod_id
AND s.channel_id = c.channel_id
AND s.time_id BETWEEN '01-12-2001' AND '31-12-2001'
GROUP BY p.prod_name
, c.channel_desc;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
WITHOUT TO_DATE
---------------------------------------------------------------------------------------------------
| Id| Operation                              |      Name    |Rows|Bytes|Cost(%CPU)|  Time  |Pstart|Pstop|
---------------------------------------------------------------------------------------------------
|  0|SELECT STATEMENT                        |              | 252|15876|305(1)|00:00:06|    |     |
|  1| HASH GROUP BY                          |              | 252|15876|305(1)|00:00:06|    |     |
| *2|  FILTER                                |              |    |     |      |        |    |     |
| *3|   HASH JOIN                            |              |2255| 138K|304(1)|00:00:06|    |     |
|  4|    TABLE ACCESS FULL                   |     PRODUCTS |  72| 2160|  2(0)|00:00:01|    |     |
|  5|    MERGE JOIN                          |              |2286|75438|302(1)|00:00:06|    |     |
|  6|     TABLE ACCESS BY INDEX ROWID        |     CHANNELS |   5|   65|  2(0)|00:00:01|    |     |
|  7|      INDEX FULL SCAN                   |  CHANNELS_PK |   5|     |  1(0)|00:00:01|    |     |
| *8|     SORT JOIN                          |              |2286|45720|299(1)|00:00:06|    |     |
|  9|      PARTITION RANGE ITERATOR          |              |2286|45720|298(0)|00:00:06| KEY|  KEY|
| 10|       TABLE ACCESS BY LOCAL INDEX ROWID|        SALES |2286|45720|298(0)|00:00:06| KEY|  KEY|
| 11|        BITMAP CONVERSION TO ROWIDS     |              |    |     |      |        |    |     |
|*12|         BITMAP INDEX RANGE SCAN        |SALES_TIME_BIX|    |     |      |        | KEY|  KEY|
---------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter(TO_DATE('01-12-2001')<=TO_DATE('31-12-2001'))
   3 - access("S"."PROD_ID"="P"."PROD_ID")
   8 - access("S"."CHANNEL_ID"="C"."CHANNEL_ID")
       filter("S"."CHANNEL_ID"="C"."CHANNEL_ID")
  12 - access("S"."TIME_ID">='01-12-2001' AND "S"."TIME_ID"<='31-12-2001')
Note the values of KEY KEY for Pstart and Pstop.
 
WITH TO_DATE
--------------------------------------------------------------------------------------------------
|Id| Operation                 | Name   | Rows  | Bytes |Cost(%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------
| 0| SELECT STATEMENT          |        |   252 | 15876 |   31 (20)| 00:00:01 |       |       |
| 1|  HASH GROUP BY            |        |   252 | 15876 |   31 (20)| 00:00:01 |       |       |
|*2|   HASH JOIN               |        | 21717 |  1336K|   28 (11)| 00:00:01 |       |       |
| 3|    TABLE ACCESS FULL      |PRODUCTS|    72 |  2160 |    2  (0)| 00:00:01 |       |       |
|*4|    HASH JOIN              |        | 21717 |   699K|   26 (12)| 00:00:01 |       |       |
| 5|     TABLE ACCESS FULL     |CHANNELS|     5 |    65 |    3  (0)| 00:00:01 |       |       |
| 6|     PARTITION RANGE SINGLE|        | 21717 |   424K|   22 (10)| 00:00:01 |    20 |    20 |
|*7|      TABLE ACCESS FULL    |SALES   | 21717 |   424K|   22 (10)| 00:00:01 |    20 |    20 |
---------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("S"."PROD_ID"="P"."PROD_ID")
   4 - access("S"."CHANNEL_ID"="C"."CHANNEL_ID")
   7 - filter("S"."TIME_ID">=TO_DATE('2001-12-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
              "S"."TIME_ID"<=TO_DATE('2001-12-31 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
Note the values of 20 20 for Pstart and Pstop.

The first execution plan shows dynamic pruning, indicated by the KEY values in the Pstart and Pstop columns. Dynamic pruning means that the database must determine at execution time which partition or partitions to access. With static pruning, the database knows at parse time which partition or partitions to access, which leads to more efficient execution.

You can frequently improve the execution plan by using explicit date conversions, which are a best practice for optimal partition pruning and index usage.
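
For reference, a rewrite of the example that uses explicit TO_DATE conversions, and therefore produces the static-pruning plan shown under WITH TO_DATE, might look like the following sketch (the date format masks are assumptions):

EXPLAIN PLAN FOR
SELECT p.prod_name
, c.channel_desc
, SUM(s.amount_sold) revenue
FROM products p
, channels c
, sales s
WHERE s.prod_id = p.prod_id
AND s.channel_id = c.channel_id
AND s.time_id BETWEEN TO_DATE('01-12-2001', 'DD-MM-YYYY')
                  AND TO_DATE('31-12-2001', 'DD-MM-YYYY')
GROUP BY p.prod_name
, c.channel_desc;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);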

Using Hints to Improve Data Warehouse Performance

Hints enable you to make decisions usually made by the optimizer. As an application developer, you might have information about your data that the optimizer does not know. Hints provide a mechanism to instruct the optimizer to choose a certain query execution plan based on specific criteria.

For example, you might know that a certain index is more selective for certain queries. Based on this information, you might be able to choose a more efficient execution plan than the optimizer. In this case, use hints to instruct the optimizer to use the optimal execution plan.

By default, Oracle Warehouse Builder includes hints to optimize a typical data load.

Example: Using Hints to Improve Data Warehouse Performance

Suppose you want to quickly run a summary across the sales table for last year while the system is otherwise idle. In this case, you could issue the following statement.

To use a hint to improve data warehouse performance:

SELECT /*+ PARALLEL(s,16) */ SUM(amount_sold)
FROM sales s
WHERE s.time_id BETWEEN TO_DATE('01-JAN-2005','DD-MON-YYYY')
  AND TO_DATE('31-DEC-2005','DD-MON-YYYY');

Another common use for hints in data warehouses is to ensure that records are efficiently loaded using compression. For this, you use the APPEND hint, as shown in the following SQL:

...
INSERT /*+ APPEND */ INTO my_materialized_view
...
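
A more complete sketch of a direct-path insert follows; the sales_2005_archive table and the date range are hypothetical:

-- Direct-path insert: rows are appended above the high-water mark and,
-- if the target table is defined with compression, are compressed as they are loaded.
INSERT /*+ APPEND */ INTO sales_2005_archive
SELECT *
FROM   sales s
WHERE  s.time_id BETWEEN TO_DATE('01-JAN-2005','DD-MON-YYYY')
                     AND TO_DATE('31-DEC-2005','DD-MON-YYYY');

COMMIT;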

Using Advisors to Verify SQL Performance

Using the SQL Tuning Advisor and SQL Access Advisor, you can invoke the query optimizer in advisory mode to examine a SQL statement or set of SQL statements, and provide recommendations to improve their efficiency. The SQL Tuning Advisor and SQL Access Advisor can make various types of recommendations, such as creating SQL profiles, restructuring SQL statements, creating additional indexes or materialized views, and refreshing optimizer statistics. Additionally, Oracle Enterprise Manager enables you to accept and implement many of these recommendations in very few steps.

The SQL Access Advisor is primarily responsible for making schema modification recommendations, such as adding or dropping indexes and materialized views. It also recommends a partitioning strategy. The SQL Tuning Advisor makes other types of recommendations, such as creating SQL profiles and restructuring SQL statements. In some cases where significant performance improvements can be gained by creating a new index, the SQL Tuning Advisor may recommend doing so. However, these recommendations must be verified by running the SQL Access Advisor with a SQL workload that contains a set of representative SQL statements.

Example: Using the SQL Tuning Advisor to Verify SQL Performance

You can use the SQL Tuning Advisor to tune a single SQL statement or multiple SQL statements. When tuning multiple SQL statements, remember that the SQL Tuning Advisor does not recognize interdependencies between the SQL statements. Instead, it is simply a convenient way to run the SQL Tuning Advisor for a large number of SQL statements.

To run the SQL Tuning Advisor to verify SQL performance:

  1. Go to the Advisor Central page, then click SQL Advisors.

    The SQL Advisors page is displayed.

  2. Click Schedule SQL Tuning Advisor.

    The Schedule SQL Tuning Advisor page is displayed. A suggested name is provided in the Name field, which you can modify. Select Comprehensive to have a comprehensive analysis performed, select Immediately for the schedule, select an appropriate SQL Tuning Set, and then click OK.

  3. The Processing page is displayed, followed by a summary of the recommendations for improving performance. Click View Recommendations.

    The Recommendations page is displayed.

  4. The recommendation is to create an index, which you can implement by clicking Implement. You may also want to run the SQL Access Advisor.
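
As an alternative to the Enterprise Manager steps above, you can run a comparable analysis from the command line with the DBMS_SQLTUNE package. The following is a minimal sketch; the task name, SQL text, and time limit are assumptions:

DECLARE
  l_task_name VARCHAR2(128);
BEGIN
  -- Create a comprehensive tuning task for a single SQL statement
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
    sql_text   => 'SELECT SUM(amount_sold) FROM sh.sales s ' ||
                  'WHERE s.time_id BETWEEN TO_DATE(''01-JAN-2005'',''DD-MON-YYYY'') ' ||
                  'AND TO_DATE(''31-DEC-2005'',''DD-MON-YYYY'')',
    scope      => DBMS_SQLTUNE.SCOPE_COMPREHENSIVE,
    time_limit => 60,
    task_name  => 'sales_sum_tuning_task');

  -- Run the analysis
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'sales_sum_tuning_task');
END;
/

-- Display the advisor's findings and recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('sales_sum_tuning_task') FROM DUAL;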

Improving Performance by Minimizing Resource Use

You can minimize resource use and improve your data warehouse's performance by using the capabilities described in this section, including partitioning, query rewrite and materialized views, indexes, and compression.

Improving Performance: Partitioning

Data warehouses often contain large tables and require techniques both for managing these large tables and for providing good query performance across these large tables. This section describes partitioning, a key method for addressing these requirements. Two capabilities relevant for query performance in a data warehouse are partition pruning and partitionwise joins.

Improving Performance: Partition Pruning

Partition pruning is an essential performance feature for data warehouses. In partition pruning, the optimizer analyzes FROM and WHERE clauses in SQL statements to eliminate unneeded partitions when building the partition access list. This enables Oracle Database to perform operations only on those partitions that are relevant to the SQL statement. Oracle Database prunes partitions when you use range, LIKE, equality, and IN-list predicates on the range or list partitioning columns, and when you use equality and IN-list predicates on the hash partitioning columns.

Partition pruning dramatically reduces the amount of data retrieved from disk and reduces processing time, which improves query performance and resource use. If you partition the index and table on different columns (with a global partitioned index), partition pruning also eliminates index partitions even when the partitions of the underlying table cannot be eliminated.

Depending upon the actual SQL statement, Oracle Database may use static or dynamic pruning. Static pruning occurs at compile time, using information about the partitions that is known beforehand. Dynamic pruning occurs at run time, when the partitions accessed by a statement are not known beforehand. A sample scenario for static pruning is a SQL statement that contains a WHERE clause with a constant literal on the partition key column. An example of dynamic pruning is the use of operators or functions in the WHERE clause.

Partition pruning affects the statistics of the objects where pruning occurs and therefore also affects the execution plan of a statement.
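
As an illustration, the following sketch shows a range-partitioned table and a query that qualifies for static pruning; the table and partition names are hypothetical:

-- Hypothetical range-partitioned table, partitioned by quarter
CREATE TABLE sales_by_quarter (
  sale_id     NUMBER,
  time_id     DATE,
  amount_sold NUMBER
)
PARTITION BY RANGE (time_id) (
  PARTITION q1_2001 VALUES LESS THAN (TO_DATE('01-APR-2001','DD-MON-YYYY')),
  PARTITION q2_2001 VALUES LESS THAN (TO_DATE('01-JUL-2001','DD-MON-YYYY')),
  PARTITION q3_2001 VALUES LESS THAN (TO_DATE('01-OCT-2001','DD-MON-YYYY')),
  PARTITION q4_2001 VALUES LESS THAN (TO_DATE('01-JAN-2002','DD-MON-YYYY'))
);

-- Constant TO_DATE literals on the partition key allow the optimizer to
-- prune to partition q4_2001 at parse time (static pruning).
SELECT SUM(amount_sold)
FROM   sales_by_quarter
WHERE  time_id BETWEEN TO_DATE('01-DEC-2001','DD-MON-YYYY')
                   AND TO_DATE('31-DEC-2001','DD-MON-YYYY');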

Improving Performance: Partitionwise Joins

Partitionwise joins reduce query response time by minimizing the amount of data exchanged among parallel execution servers when joins execute in parallel. This significantly reduces response time and improves the use of both CPU and memory resources. In Oracle Real Application Clusters environments, partitionwise joins also avoid or at least limit the data traffic over the interconnect, which is key to achieving good scalability for massive join operations.

Partitionwise joins can be full or partial. Oracle Database decides which type of join to use.
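
For example, two tables hash partitioned on the same join key with the same number of partitions allow a full partitionwise join. The following sketch uses hypothetical tables:

-- Hypothetical tables, both hash partitioned on the join key (cust_id)
CREATE TABLE orders_pw (
  order_id NUMBER,
  cust_id  NUMBER,
  amount   NUMBER
)
PARTITION BY HASH (cust_id) PARTITIONS 8;

CREATE TABLE customers_pw (
  cust_id   NUMBER,
  cust_name VARCHAR2(100)
)
PARTITION BY HASH (cust_id) PARTITIONS 8;

-- Joining on the common partitioning key lets the database pair partition i
-- of one table with partition i of the other (a full partitionwise join).
SELECT /*+ PARALLEL(o,8) PARALLEL(c,8) */ c.cust_name, SUM(o.amount)
FROM   orders_pw o, customers_pw c
WHERE  o.cust_id = c.cust_id
GROUP BY c.cust_name;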

Example: Evaluating Partitioning with the SQL Access Advisor

You should always consider partitioning in data warehousing environments.

To evaluate partitioning:

  1. In the Advisor Central page, click SQL Advisors.

    The SQL Advisors page is displayed.

  2. Click SQL Access Advisor.

    The SQL Access Advisor page is displayed.

  3. From the Initial Options menu, select Use Default Options and click Continue.

  4. From the Workload Sources, select Current and Recent SQL Activity and click Next.

    The Recommendation Options page is displayed.

  5. Select Partitioning and then Comprehensive Mode, then click Next.

    The Schedule page is displayed.

  6. Enter SQLACCESStest1 into the Task Name field and click Next.

    The Review page is displayed.

  7. Click Submit.

    The Confirmation page is displayed.

  8. Select your task and click View Result. The Results for Task page is displayed, showing possible improvements as a result of partitioning.

    Figure 10-1 Partitioning Results


Improving Performance: Query Rewrite and Materialized Views

In data warehouses, you can use materialized views to compute and store aggregated data such as the sum of sales. You can also use materialized views to compute joins with or without aggregations, and they are very useful for frequently executed expensive joins between large tables and expensive calculations. A materialized view eliminates the overhead associated with expensive joins and aggregations for a large or important class of queries because it computes and stores summarized data before processing large joins or queries. Materialized views in these environments are often referred to as summaries.

One of the major benefits of creating and maintaining materialized views is the ability to use the query rewrite feature, which transforms a SQL statement expressed in terms of tables or views into a statement accessing one or more materialized views that are defined on the detail tables. The transformation is transparent to the user or application, requiring no intervention and no reference to the materialized view in the SQL statement. Because the query rewrite feature is transparent, materialized views can be added or dropped just like indexes without invalidating the SQL in the application code.

When underlying tables contain large amounts of data, it is a resource intensive and time-consuming process to compute the required aggregates or to compute joins between these tables. In these cases, queries can take minutes or even hours to return the answer. Because materialized views contain already computed aggregates and joins, Oracle Database uses the powerful query rewrite process to quickly answer the query using materialized views.
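
For illustration, a materialized view that precomputes the join and aggregation used in the earlier EXPLAIN PLAN example might be defined as in the following sketch; the materialized view name and refresh options are assumptions, and the tables come from the SH sample schema:

CREATE MATERIALIZED VIEW sales_by_prod_channel_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT p.prod_name
,      c.channel_desc
,      SUM(s.amount_sold) AS revenue
FROM   sales s, products p, channels c
WHERE  s.prod_id = p.prod_id
AND    s.channel_id = c.channel_id
GROUP BY p.prod_name, c.channel_desc;

Because ENABLE QUERY REWRITE is specified, queries against the detail tables that match this aggregation can be transparently redirected to the materialized view.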

Improving Performance: Indexes

Bitmap indexes are widely used in data warehousing environments. These environments typically have large amounts of data and ad hoc queries, but a low level of concurrent DML transactions. Fully indexing a large table with a traditional B-tree index can be prohibitively expensive in terms of disk space because the indexes can be several times larger than the data in the table. Bitmap indexes are typically only a fraction of the size of the indexed data in the table. For such applications, bitmap indexing provides the following:

  • Reduced response time for large classes of ad hoc queries

  • Reduced storage requirements compared to other indexing techniques

  • Dramatic performance gains even on hardware with a relatively small number of CPUs or a small amount of memory

  • Efficient maintenance during parallel DML and loads
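
For example, a bitmap index on a low-cardinality column of a large fact table might be created as in the following sketch; the index name is hypothetical and the table is the partitioned SALES table from the SH sample schema:

-- Bitmap index on a low-cardinality column; LOCAL creates one index
-- partition per table partition.
CREATE BITMAP INDEX sales_promo_bix
  ON sales (promo_id)
  LOCAL
  NOLOGGING;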

Improving Performance: Compression

During bulk-load operations, Oracle Database can compress the data being loaded. Oracle Database handles data transformation and compression internally and requires no application changes to use compression. Compression can help improve performance for queries that scan large amounts of data, by reducing the amount of I/O required to scan that data.

No special installation is required to configure this feature. However, to use this feature, the database compatibility parameter must be set to 11.2.0 or higher.

Note:

Hybrid Columnar Compression is a feature of certain Oracle storage systems. See Oracle Database Concepts for more information.

Improving Performance: DBMS_COMPRESSION Package

The PL/SQL package DBMS_COMPRESSION provides a compression advisor interface to help choose the correct compression level for an application. The compression advisor analyzes the objects in the database and estimates the possible compression ratios that could be achieved.
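
A minimal sketch of calling the compression advisor from PL/SQL follows; the scratch tablespace, schema, and table names are assumptions:

SET SERVEROUTPUT ON

DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  -- Estimate the OLTP compression ratio for the SH.CUSTOMERS table
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',
    ownname        => 'SH',
    tabname        => 'CUSTOMERS',
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);

  DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio);
END;
/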

Improving Performance: table_compress clause of CREATE TABLE and ALTER TABLE

The table_compress clause of the CREATE TABLE and ALTER TABLE statements provides COMPRESS, which takes a parameter for compression level. Use COMPRESS to instruct the database whether to compress data segments to reduce disk use. All forms of table compression are generally useful in OLAP environments and data warehouses, where the number of insert and update operations is small; some forms are also useful in OLTP environments.

Note:

Compression must be enabled when the table is created, or the table must subsequently be altered to enable it.
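
For example, you can create a table with compression enabled, or alter an existing table so that subsequent direct-path loads are compressed; the table name in this sketch is hypothetical:

-- Create a compressed table (basic table compression, applied during
-- direct-path operations such as INSERT /*+ APPEND */ or CREATE TABLE AS SELECT)
CREATE TABLE sales_history
  COMPRESS BASIC
AS SELECT * FROM sales WHERE 1 = 0;

-- Alternatively, enable OLTP compression on an existing table
ALTER TABLE sales_history COMPRESS FOR OLTP;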

Using Resources Optimally

You can maximize how resources are used in your system by ensuring that operations run in parallel whenever possible. Database operations run faster if they are not constrained by resources; the constraining resource may be CPU, I/O capacity, memory, or interconnect traffic (in a cluster). To improve the performance of database operations, you focus on the constraining resource and try to eliminate the bottleneck, although the problem might then shift to another resource. Oracle Database provides functions to optimize the use of available resources and to avoid using unnecessary resources.

Optimizing Performance with Parallel Execution

Parallel execution dramatically reduces response time for data-intensive operations on large databases typically associated with decision support systems (DSS) and data warehouses. You can also implement parallel execution on certain types of online transaction processing (OLTP) and hybrid systems. Parallel execution is sometimes called parallelism. Parallelism is the practice of breaking down a task so that, instead of one process doing all the work in a query, many processes do parts of the work at the same time. An example of this is when four processes handle four different quarters in a year instead of one process handling all four quarters by itself. The improvement in performance can be quite significant. Parallel execution improves processing for the following:

  • Queries requiring large table scans, joins, or partitioned index scans

  • Creation of large indexes

  • Creation of large tables (including materialized views)

  • Bulk insert, update, merge, and delete operations

You can also use parallel execution to access object types within an Oracle database. For example, you can use parallel execution to access large objects (LOBs).

Parallel execution benefits systems with all of the following characteristics:

  • Symmetric multiprocessors (SMPs), clusters, or massively parallel systems

  • Sufficient I/O bandwidth

  • Underutilized or intermittently used CPUs (for example, systems where CPU usage is typically less than 30 percent)

  • Sufficient memory to support additional memory-intensive processes, such as sorts, hashing, and I/O buffers

If your system lacks any of these characteristics, then parallel execution might not significantly improve performance. In fact, parallel execution might reduce system performance on overutilized systems or systems with small I/O bandwidth.

How Parallel Execution Works

Parallel execution divides the task of running a SQL statement into multiple small units, each of which is executed by a separate process. Also, the incoming data (tables, indexes, partitions) can be divided into parts called granules. The user shadow process takes on the role of parallel execution coordinator, also called the query coordinator. The query coordinator performs the following tasks:

  • Parses the query and determines the degree of parallelism

  • Allocates one or two sets of slaves (threads or processes)

  • Controls the query and sends instructions to the parallel query slaves

  • Determines which tables or indexes must be scanned by the parallel query slaves

  • Produces the final output to the user

Setting the Degree of Parallelism

The parallel execution coordinator may enlist two or more of the instance's parallel execution servers to process a SQL statement. The number of parallel execution servers associated with a single operation is known as the degree of parallelism or DOP.

A single operation is a part of a SQL statement, such as an ORDER BY operation or a full table scan performed as part of a join on a nonindexed column.

The degree of parallelism is specified in the following ways:

  • At the statement level with PARALLEL hints

  • At the session level by issuing the ALTER SESSION FORCE PARALLEL statement

  • At the table level in the table's definition

  • At the index level in the index's definition

Example: Setting the Degree of Parallelism

Suppose that you want to set the DOP to 4 on a table.

To set the degree of parallelism:

Issue the following statement:

ALTER TABLE orders PARALLEL 4;
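
You can set the degree of parallelism at the other levels listed above in a similar way, as in the following sketch; the index name is hypothetical:

-- Statement level: request a DOP of 4 for this query only
SELECT /*+ PARALLEL(s, 4) */ COUNT(*) FROM sales s;

-- Session level: force parallel query for the current session
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4;

-- Index level: set a default DOP in the index definition
ALTER INDEX orders_ix PARALLEL 4;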

About Wait Events

Wait events are statistics that are incremented by a server process to indicate that the server process had to wait for an event to complete before being able to continue processing. A session could wait for a variety of reasons, including waiting for more input, waiting for the operating system to complete a service such as a disk write, or waiting for a lock or latch.

When a session is waiting for resources, it is not doing any useful work. A large number of wait events is a source of concern. Wait event data reveals various symptoms of problems that might be affecting performance, such as latch contention, buffer contention, and I/O contention.
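
For example, you could examine the accumulated non-idle wait events for the instance with a query similar to the following sketch against the V$SYSTEM_EVENT view:

-- Top non-idle wait events since instance startup, ordered by time waited
SELECT event
,      total_waits
,      time_waited        -- in hundredths of a second
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER BY time_waited DESC;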