Chapter 15 The InnoDB Storage Engine

Table of Contents

15.1 Introduction to InnoDB
15.1.1 Benefits of Using InnoDB Tables
15.1.2 Best Practices for InnoDB Tables
15.1.3 Checking InnoDB Availability
15.1.4 Testing and Benchmarking with InnoDB
15.1.5 Turning Off InnoDB
15.2 InnoDB and the ACID Model
15.3 InnoDB Multi-Versioning
15.4 InnoDB Architecture
15.4.1 Buffer Pool
15.4.2 Change Buffer
15.4.3 Adaptive Hash Index
15.4.4 Redo Log Buffer
15.4.5 System Tablespace
15.4.6 InnoDB Data Dictionary
15.4.7 Doublewrite Buffer
15.4.8 Undo Logs
15.4.9 File-Per-Table Tablespaces
15.4.10 General Tablespaces
15.4.11 Undo Tablespace
15.4.12 Temporary Tablespace
15.4.13 Redo Log
15.5 InnoDB Locking and Transaction Model
15.5.1 InnoDB Locking
15.5.2 InnoDB Transaction Model
15.5.3 Locks Set by Different SQL Statements in InnoDB
15.5.4 Phantom Rows
15.5.5 Deadlocks in InnoDB
15.6 InnoDB Configuration
15.6.1 InnoDB Startup Configuration
15.6.2 Configuring InnoDB for Read-Only Operation
15.6.3 InnoDB Buffer Pool Configuration
15.6.4 Configuring the Memory Allocator for InnoDB
15.6.5 Configuring InnoDB Change Buffering
15.6.6 Configuring Thread Concurrency for InnoDB
15.6.7 Configuring the Number of Background InnoDB I/O Threads
15.6.8 Using Asynchronous I/O on Linux
15.6.9 Configuring the InnoDB Master Thread I/O Rate
15.6.10 Configuring Spin Lock Polling
15.6.11 Configuring InnoDB Purge Scheduling
15.6.12 Configuring Optimizer Statistics for InnoDB
15.6.13 Configuring the Merge Threshold for Index Pages
15.7 InnoDB Tablespaces
15.7.1 Resizing the InnoDB System Tablespace
15.7.2 Changing the Number or Size of InnoDB Redo Log Files
15.7.3 Using Raw Disk Partitions for the System Tablespace
15.7.4 InnoDB File-Per-Table Tablespaces
15.7.5 Creating a File-Per-Table Tablespace Outside the Data Directory
15.7.6 Copying File-Per-Table Tablespaces to Another Server
15.7.7 Storing InnoDB Undo Logs in Separate Tablespaces
15.7.8 Truncating Undo Logs That Reside in Undo Tablespaces
15.7.9 InnoDB General Tablespaces
15.7.10 InnoDB Tablespace Encryption
15.8 InnoDB Tables and Indexes
15.8.1 Creating InnoDB Tables
15.8.2 Role of the .frm File for InnoDB Tables
15.8.3 Physical Row Structure of InnoDB Tables
15.8.4 Moving or Copying InnoDB Tables to Another Machine
15.8.5 Converting Tables from MyISAM to InnoDB
15.8.6 AUTO_INCREMENT Handling in InnoDB
15.8.7 InnoDB and FOREIGN KEY Constraints
15.8.8 Limits on InnoDB Tables
15.8.9 Clustered and Secondary Indexes
15.8.10 InnoDB FULLTEXT Indexes
15.8.11 Physical Structure of an InnoDB Index
15.8.12 Sorted Index Builds
15.9 InnoDB Table and Page Compression
15.9.1 InnoDB Table Compression
15.9.2 InnoDB Page Compression
15.10 InnoDB File-Format Management
15.10.1 Enabling File Formats
15.10.2 Verifying File Format Compatibility
15.10.3 Identifying the File Format in Use
15.10.4 Modifying the File Format
15.11 InnoDB Row Storage and Row Formats
15.11.1 Overview of InnoDB Row Storage
15.11.2 Specifying the Row Format for a Table
15.11.3 DYNAMIC and COMPRESSED Row Formats
15.11.4 COMPACT and REDUNDANT Row Formats
15.12 InnoDB Disk I/O and File Space Management
15.12.1 InnoDB Disk I/O
15.12.2 File Space Management
15.12.3 InnoDB Checkpoints
15.12.4 Defragmenting a Table
15.12.5 Reclaiming Disk Space with TRUNCATE TABLE
15.13 InnoDB and Online DDL
15.13.1 Overview of Online DDL
15.13.2 Performance and Concurrency Considerations for Online DDL
15.13.3 SQL Syntax for Online DDL
15.13.4 Combining or Separating DDL Statements
15.13.5 Examples of Online DDL
15.13.6 Implementation Details of Online DDL
15.13.7 How Crash Recovery Works with Online DDL
15.13.8 Online DDL for Partitioned InnoDB Tables
15.13.9 Limitations of Online DDL
15.14 InnoDB Startup Options and System Variables
15.15 InnoDB INFORMATION_SCHEMA Tables
15.15.1 InnoDB INFORMATION_SCHEMA Tables about Compression
15.15.2 InnoDB INFORMATION_SCHEMA Transaction and Locking Information
15.15.3 InnoDB INFORMATION_SCHEMA System Tables
15.15.4 InnoDB INFORMATION_SCHEMA FULLTEXT Index Tables
15.15.5 InnoDB INFORMATION_SCHEMA Buffer Pool Tables
15.15.6 InnoDB INFORMATION_SCHEMA Metrics Table
15.15.7 InnoDB INFORMATION_SCHEMA Temporary Table Information Table
15.15.8 Retrieving InnoDB Tablespace Metadata from INFORMATION_SCHEMA.FILES
15.16 InnoDB Integration with MySQL Performance Schema
15.16.1 Monitoring ALTER TABLE Progress for InnoDB Tables Using Performance Schema
15.16.2 Monitoring InnoDB Mutex Waits Using Performance Schema
15.17 InnoDB Monitors
15.17.1 InnoDB Monitor Types
15.17.2 Enabling InnoDB Monitors
15.17.3 InnoDB Standard Monitor and Lock Monitor Output
15.18 InnoDB Backup and Recovery
15.18.1 InnoDB Backup
15.18.2 InnoDB Recovery
15.19 InnoDB and MySQL Replication
15.20 InnoDB memcached Plugin
15.20.1 Benefits of the InnoDB memcached Plugin
15.20.2 InnoDB memcached Architecture
15.20.3 Setting Up the InnoDB memcached Plugin
15.20.4 Security Considerations for the InnoDB memcached Plugin
15.20.5 Writing Applications for the InnoDB memcached Plugin
15.20.6 The InnoDB memcached Plugin and Replication
15.20.7 InnoDB memcached Plugin Internals
15.20.8 Troubleshooting the InnoDB memcached Plugin
15.21 InnoDB Troubleshooting
15.21.1 Troubleshooting InnoDB I/O Problems
15.21.2 Forcing InnoDB Recovery
15.21.3 Troubleshooting InnoDB Data Dictionary Operations
15.21.4 InnoDB Error Handling

15.1 Introduction to InnoDB

InnoDB is a general-purpose storage engine that balances high reliability and high performance. In MySQL 5.7, InnoDB is the default MySQL storage engine. Unless you have configured a different default storage engine, issuing a CREATE TABLE statement without an ENGINE= clause creates an InnoDB table.

Key Advantages of InnoDB

Key advantages of InnoDB include:

Table 15.1 InnoDB Storage Engine Features

Feature                                   Support
Storage limits                            64TB
Transactions                              Yes
Locking granularity                       Row
MVCC                                      Yes
Geospatial data type support              Yes
Geospatial indexing support               Yes[a]
B-tree indexes                            Yes
T-tree indexes                            No
Hash indexes                              No[b]
Full-text search indexes                  Yes[c]
Clustered indexes                         Yes
Data caches                               Yes
Index caches                              Yes
Compressed data                           Yes[d]
Encrypted data[e]                         Yes
Cluster database support                  No
Replication support[f]                    Yes
Foreign key support                       Yes
Backup / point-in-time recovery[g]        Yes
Query cache support                       Yes
Update statistics for data dictionary     Yes

[a] InnoDB support for geospatial indexing is available in MySQL 5.7.5 and higher.

[b] InnoDB utilizes hash indexes internally for its Adaptive Hash Index feature.

[c] InnoDB support for FULLTEXT indexes is available in MySQL 5.6.4 and higher.

[d] Compressed InnoDB tables require the InnoDB Barracuda file format.

[e] Implemented in the server (via encryption functions). Data-at-rest tablespace encryption is available in MySQL 5.7 and higher.

[f] Implemented in the server, rather than in the storage engine.

[g] Implemented in the server, rather than in the storage engine.


To compare the features of InnoDB with other storage engines provided with MySQL, see the Storage Engine Features table in Chapter 16, Alternative Storage Engines.

InnoDB Enhancements and New Features

For information about InnoDB enhancements and new features in MySQL 5.7, refer to:

Additional InnoDB Information and Resources

15.1.1 Benefits of Using InnoDB Tables

If you use MyISAM tables but are not committed to them for technical reasons, you may find InnoDB tables beneficial for the following reasons:

  • If your server crashes because of a hardware or software issue, regardless of what was happening in the database at the time, you don't need to do anything special after restarting the database. InnoDB crash recovery automatically finalizes any changes that were committed before the time of the crash, and undoes any changes that were in process but not committed. Just restart and continue where you left off.

  • The InnoDB storage engine maintains its own buffer pool that caches table and index data in main memory as data is accessed. Frequently used data is processed directly from memory. This cache applies to many types of information and speeds up processing. On dedicated database servers, up to 80% of physical memory is often assigned to the InnoDB buffer pool.

  • If you split up related data into different tables, you can set up foreign keys that enforce referential integrity. Update or delete data, and the related data in other tables is updated or deleted automatically. Try to insert data into a secondary table without corresponding data in the primary table, and the bad data gets kicked out automatically.

  • If data becomes corrupted on disk or in memory, a checksum mechanism alerts you to the bogus data before you use it.

  • When you design your database with appropriate primary key columns for each table, operations involving those columns are automatically optimized. It is very fast to reference the primary key columns in WHERE clauses, ORDER BY clauses, GROUP BY clauses, and join operations.

  • Inserts, updates, and deletes are optimized by an automatic mechanism called change buffering. InnoDB not only allows concurrent read and write access to the same table, it caches changed data to streamline disk I/O.

  • Performance benefits are not limited to giant tables with long-running queries. When the same rows are accessed over and over from a table, a feature called the Adaptive Hash Index takes over to make these lookups even faster, as if they came out of a hash table.

  • You can compress tables and associated indexes.

  • You can create and drop indexes with much less impact on performance and availability.

  • Truncating a file-per-table tablespace is very fast, and can free up disk space for the operating system to reuse, rather than freeing up space within the system tablespace that only InnoDB could reuse.

  • The storage layout for table data is more efficient for BLOB and long text fields, with the DYNAMIC row format.

  • You can monitor the internal workings of the storage engine by querying INFORMATION_SCHEMA tables.

  • You can monitor the performance details of the storage engine by querying Performance Schema tables.

  • You can freely mix InnoDB tables with tables from other MySQL storage engines, even within the same statement. For example, you can use a join operation to combine data from InnoDB and MEMORY tables in a single query.

  • InnoDB has been designed for CPU efficiency and maximum performance when processing large data volumes.

  • InnoDB tables can handle large quantities of data, even on operating systems where file size is limited to 2GB.

For InnoDB-specific tuning techniques you can apply in your application code, see Section 9.5, “Optimizing for InnoDB Tables”.

15.1.2 Best Practices for InnoDB Tables

This section describes best practices when using InnoDB tables.

  • Specifying a primary key for every table using the most frequently queried column or columns, or an auto-increment value if there is no obvious primary key.

  • Using joins wherever data is pulled from multiple tables based on identical ID values from those tables. For fast join performance, define foreign keys on the join columns, and declare those columns with the same data type in each table. Adding foreign keys ensures that referenced columns are indexed, which can improve performance. Foreign keys also propagate deletes or updates to all affected tables, and prevent insertion of data in a child table if the corresponding IDs are not present in the parent table.

  • Turning off autocommit. Committing hundreds of times a second puts a cap on performance (limited by the write speed of your storage device).

  • Grouping sets of related DML operations into transactions, by bracketing them with START TRANSACTION and COMMIT statements. While you don't want to commit too often, you also don't want to issue huge batches of INSERT, UPDATE, or DELETE statements that run for hours without committing.

  • Not using LOCK TABLES statements. InnoDB can handle multiple sessions all reading and writing to the same table at once, without sacrificing reliability or high performance. To get exclusive write access to a set of rows, use the SELECT ... FOR UPDATE syntax to lock just the rows you intend to update. (A short sketch combining several of these practices appears after this list.)

  • Enabling the innodb_file_per_table option to put the data and indexes for individual tables into separate files, instead of in a single giant system tablespace. This setting is required to use some of the other features, such as table compression and fast truncation.

    The innodb_file_per_table option is enabled by default as of MySQL 5.6.6.

  • Evaluating whether your data and access patterns benefit from the InnoDB table compression feature (ROW_FORMAT=COMPRESSED) on the CREATE TABLE statement. You can compress InnoDB tables without sacrificing read/write capability.

  • Running your server with the option --sql_mode=NO_ENGINE_SUBSTITUTION to prevent tables being created with a different storage engine if there is an issue with the engine specified in the ENGINE= clause of CREATE TABLE.
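
As a short sketch combining several of these practices, the following statements disable autocommit, lock only the rows to be changed with SELECT ... FOR UPDATE, and bracket the related changes in a single transaction. The account table and its id and balance columns are hypothetical and serve only to illustrate the pattern:

SET autocommit = 0;

START TRANSACTION;
SELECT balance FROM account WHERE id IN (1, 2) FOR UPDATE;
UPDATE account SET balance = balance - 100 WHERE id = 1;
UPDATE account SET balance = balance + 100 WHERE id = 2;
COMMIT;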

15.1.3 Checking InnoDB Availability

To determine whether your server supports InnoDB:

  • Issue the command SHOW ENGINES; to see all the different MySQL storage engines. Look for DEFAULT in the InnoDB line. Alternatively, query the INFORMATION_SCHEMA ENGINES table, as in the example queries after this list. (Now that InnoDB is the default MySQL storage engine, only very specialized environments might not support it.)

  • If InnoDB is not present, you have a mysqld binary that was compiled without InnoDB support and you need to get a different one.

  • If InnoDB is present but disabled, go back through your startup options and configuration file and get rid of any skip-innodb option.
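
As a quick check, either of the following shows the support status of InnoDB; a SUPPORT value of DEFAULT indicates that InnoDB is the default storage engine on your server:

mysql> SHOW ENGINES;
mysql> SELECT ENGINE, SUPPORT FROM INFORMATION_SCHEMA.ENGINES
       WHERE ENGINE = 'InnoDB';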

15.1.4 Testing and Benchmarking with InnoDB

If InnoDB is not your default storage engine, you can determine if your database server or applications work correctly with InnoDB by restarting the server with --default-storage-engine=InnoDB defined on the command line or with default-storage-engine=innodb defined in the [mysqld] section of the my.cnf configuration file.

Since changing the default storage engine only affects new tables as they are created, run all your application installation and setup steps to confirm that everything installs properly. Then exercise all the application features to make sure all the data loading, editing, and querying features work. If a table relies on some MyISAM-specific feature, you'll receive an error; add the ENGINE=MyISAM clause to the CREATE TABLE statement to avoid the error.

If you did not make a deliberate decision about the storage engine, and you just want to preview how certain tables work when they're created under InnoDB, issue the command ALTER TABLE table_name ENGINE=InnoDB; for each table. Or, to run test queries and other statements without disturbing the original table, make a copy like so:

CREATE TABLE InnoDB_Table (...) ENGINE=InnoDB AS SELECT * FROM MyISAM_Table;

To get a true idea of the performance with a full application under a realistic workload, install the latest MySQL server and run benchmarks.

Test the full application lifecycle, from installation, through heavy usage, and server restart. Kill the server process while the database is busy to simulate a power failure, and verify that the data is recovered successfully when you restart the server.

Test any replication configurations, especially if you use different MySQL versions and options on the master and the slaves.

15.1.5 Turning Off InnoDB

Oracle recommends InnoDB as the preferred storage engine for typical database applications, from single-user wikis and blogs running on a local system, to high-end applications pushing the limits of performance. In MySQL 5.7, InnoDB is the default storage engine for new tables.

Important

InnoDB cannot be disabled. The --skip-innodb option is deprecated and has no effect, and its use results in a warning. It will be removed in a future MySQL release. This also applies to its synonyms (--innodb=OFF, --disable-innodb, and so forth).

15.2 InnoDB and the ACID Model

The ACID model is a set of database design principles that emphasize aspects of reliability that are important for business data and mission-critical applications. MySQL includes components such as the InnoDB storage engine that adhere closely to the ACID model, so that data is not corrupted and results are not distorted by exceptional conditions such as software crashes and hardware malfunctions. When you rely on ACID-compliant features, you do not need to reinvent the wheel of consistency checking and crash recovery mechanisms. In cases where you have additional software safeguards, ultra-reliable hardware, or an application that can tolerate a small amount of data loss or inconsistency, you can adjust MySQL settings to trade some of the ACID reliability for greater performance or throughput.

The following sections discuss how MySQL features, in particular the InnoDB storage engine, interact with the categories of the ACID model:

  • A: atomicity.

  • C: consistency.

  • I: isolation.

  • D: durability.

Atomicity

The atomicity aspect of the ACID model mainly involves InnoDB transactions. Related MySQL features include:

  • Autocommit setting.

  • COMMIT statement.

  • ROLLBACK statement.

  • Operational data from the INFORMATION_SCHEMA tables.

Consistency

The consistency aspect of the ACID model mainly involves internal InnoDB processing to protect data from crashes. Related MySQL features include:

  • The InnoDB doublewrite buffer.

  • InnoDB crash recovery.

Isolation

The isolation aspect of the ACID model mainly involves InnoDB transactions, in particular the isolation level that applies to each transaction. Related MySQL features include:

  • Autocommit setting.

  • SET TRANSACTION ISOLATION LEVEL statement.

  • The low-level details of InnoDB locking. During performance tuning, you see these details through INFORMATION_SCHEMA tables.

Durability

The durability aspect of the ACID model involves MySQL software features interacting with your particular hardware configuration. Because of the many possibilities depending on the capabilities of your CPU, network, and storage devices, this aspect is the most complicated to provide concrete guidelines for. (And those guidelines might take the form of buy new hardware.) Related MySQL features include:

  • InnoDB doublewrite buffer, turned on and off by the innodb_doublewrite configuration option.

  • Configuration option innodb_flush_log_at_trx_commit. (A configuration sketch using this option and sync_binlog appears after this list.)

  • Configuration option sync_binlog.

  • Configuration option innodb_file_per_table.

  • Write buffer in a storage device, such as a disk drive, SSD, or RAID array.

  • Battery-backed cache in a storage device.

  • The operating system used to run MySQL, in particular its support for the fsync() system call.

  • Uninterruptible power supply (UPS) protecting the electrical power to all computer servers and storage devices that run MySQL servers and store MySQL data.

  • Your backup strategy, such as frequency and types of backups, and backup retention periods.

  • For distributed or hosted data applications, the particular characteristics of the data centers where the hardware for the MySQL servers is located, and network connections between the data centers.
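
As a minimal configuration sketch for the most durable settings of the server-side options listed above (assuming the binary log is enabled; these values favor durability over throughput):

[mysqld]
# Flush and sync the redo log at each transaction commit.
innodb_flush_log_at_trx_commit = 1
# Synchronize the binary log to disk at each commit.
sync_binlog = 1
# Leave the doublewrite buffer enabled (the default).
innodb_doublewrite = 1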

15.3 InnoDB Multi-Versioning

InnoDB is a multi-versioned storage engine: it keeps information about old versions of changed rows, to support transactional features such as concurrency and rollback. This information is stored in the tablespace in a data structure called a rollback segment (after an analogous data structure in Oracle). InnoDB uses the information in the rollback segment to perform the undo operations needed in a transaction rollback. It also uses the information to build earlier versions of a row for a consistent read.

Internally, InnoDB adds three fields to each row stored in the database. A 6-byte DB_TRX_ID field indicates the transaction identifier for the last transaction that inserted or updated the row. Also, a deletion is treated internally as an update where a special bit in the row is set to mark it as deleted. Each row also contains a 7-byte DB_ROLL_PTR field called the roll pointer. The roll pointer points to an undo log record written to the rollback segment. If the row was updated, the undo log record contains the information necessary to rebuild the content of the row before it was updated. A 6-byte DB_ROW_ID field contains a row ID that increases monotonically as new rows are inserted. If InnoDB generates a clustered index automatically, the index contains row ID values. Otherwise, the DB_ROW_ID column does not appear in any index.

Undo logs in the rollback segment are divided into insert and update undo logs. Insert undo logs are needed only in transaction rollback and can be discarded as soon as the transaction commits. Update undo logs are also used in consistent reads, but they can be discarded only when no transaction remains to which InnoDB has assigned a snapshot that might need the information in the update undo log to build an earlier version of a database row for a consistent read.

Commit your transactions regularly, including those transactions that issue only consistent reads. Otherwise, InnoDB cannot discard data from the update undo logs, and the rollback segment may grow too big, filling up your tablespace.

The physical size of an undo log record in the rollback segment is typically smaller than the corresponding inserted or updated row. You can use this information to calculate the space needed for your rollback segment.

In the InnoDB multi-versioning scheme, a row is not physically removed from the database immediately when you delete it with an SQL statement. InnoDB only physically removes the corresponding row and its index records when it discards the update undo log record written for the deletion. This removal operation is called a purge, and it is quite fast, usually taking the same order of time as the SQL statement that did the deletion.

If you insert and delete rows in smallish batches at about the same rate in the table, the purge thread can start to lag behind and the table can grow bigger and bigger because of all the dead rows, making everything disk-bound and very slow. In such a case, throttle new row operations, and allocate more resources to the purge thread by tuning the innodb_max_purge_lag system variable. See Section 15.14, “InnoDB Startup Options and System Variables” for more information.
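
For example, a hypothetical starting point that delays DML operations once the purge backlog exceeds one million undo records, with the imposed delay capped, might look like this (the values are illustrative, not recommendations):

mysql> SET GLOBAL innodb_max_purge_lag = 1000000;
mysql> SET GLOBAL innodb_max_purge_lag_delay = 5000;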

Multi-Versioning and Secondary Indexes

InnoDB multiversion concurrency control (MVCC) treats secondary indexes differently than clustered indexes. Records in a clustered index are updated in-place, and their hidden system columns point to undo log entries from which earlier versions of records can be reconstructed. Unlike clustered index records, secondary index records do not contain hidden system columns nor are they updated in-place.

When a secondary index column is updated, old secondary index records are delete-marked, new records are inserted, and delete-marked records are eventually purged. When a secondary index record is delete-marked or the secondary index page is updated by a newer transaction, InnoDB looks up the database record in the clustered index. In the clustered index, the record's DB_TRX_ID is checked, and the correct version of the record is retrieved from the undo log if the record was modified after the reading transaction was initiated.

If a secondary index record is marked for deletion or the secondary index page is updated by a newer transaction, the covering index technique is not used. Instead of returning values from the index structure, InnoDB looks up the record in the clustered index.

However, if the index condition pushdown (ICP) optimization is enabled, and parts of the WHERE condition can be evaluated using only fields from the index, the MySQL server still pushes this part of the WHERE condition down to the storage engine where it is evaluated using the index. If no matching records are found, the clustered index lookup is avoided. If matching records are found, even among delete-marked records, InnoDB looks up the record in the clustered index.

15.4 InnoDB Architecture

This section provides an introduction to the major components of the InnoDB storage engine architecture.

15.4.1 Buffer Pool

The buffer pool is an area in main memory where InnoDB caches table and index data as data is accessed. The buffer pool allows frequently used data to be processed directly from memory, which speeds up processing. On dedicated database servers, up to 80% of physical memory is often assigned to the InnoDB buffer pool.

For efficiency of high-volume read operations, the buffer pool is divided into pages that can potentially hold multiple rows. For efficiency of cache management, the buffer pool is implemented as a linked list of pages; data that is rarely used is aged out of the cache, using a variation of the LRU algorithm.

For more information, see Section 15.6.3.1, “The InnoDB Buffer Pool”, and Section 15.6.3, “InnoDB Buffer Pool Configuration”.

15.4.2 Change Buffer

The change buffer is a special data structure that caches changes to secondary index pages when affected pages are not in the buffer pool. The buffered changes, which may result from INSERT, UPDATE, or DELETE operations (DML), are merged later when the pages are loaded into the buffer pool by other read operations.

Unlike clustered indexes, secondary indexes are usually non-unique, and inserts into secondary indexes happen in a relatively random order. Similarly, deletes and updates may affect secondary index pages that are not adjacently located in an index tree. Merging cached changes at a later time, when affected pages are read into the buffer pool by other operations, avoids substantial random access I/O that would be required to read-in secondary index pages from disk.

Periodically, the purge operation that runs when the system is mostly idle, or during a slow shutdown, writes the updated index pages to disk. The purge operation can write disk blocks for a series of index values more efficiently than if each value were written to disk immediately.

Change buffer merging may take several hours when there are numerous secondary indexes to update and many affected rows. During this time, disk I/O is increased, which can cause a significant slowdown for disk-bound queries. Change buffer merging may also continue to occur after a transaction is committed. In fact, change buffer merging may continue to occur after a server shutdown and restart (see Section 15.21.2, “Forcing InnoDB Recovery” for more information).

In memory, the change buffer occupies part of the InnoDB buffer pool. On disk, the change buffer is part of the system tablespace, so that index changes remain buffered across database restarts.

The type of data cached in the change buffer is governed by the innodb_change_buffering configuration option. For more information, see Section 15.6.5, “Configuring InnoDB Change Buffering”. You can also configure the maximum change buffer size. For more information, see Section 15.6.5.1, “Configuring the Change Buffer Maximum Size”.
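
As a sketch, both settings are dynamic and can be adjusted at runtime; the values shown here are the MySQL 5.7 defaults:

mysql> SET GLOBAL innodb_change_buffering = 'all';
mysql> SET GLOBAL innodb_change_buffer_max_size = 25;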

Monitoring the Change Buffer

The following options are available for change buffer monitoring:

  • InnoDB Standard Monitor output includes status information for the change buffer. To view monitor data, issue the SHOW ENGINE INNODB STATUS command.

    mysql> SHOW ENGINE INNODB STATUS\G

    Change buffer status information is located under the INSERT BUFFER AND ADAPTIVE HASH INDEX heading and appears similar to the following:

    -------------------------------------
    INSERT BUFFER AND ADAPTIVE HASH INDEX
    -------------------------------------
    Ibuf: size 1, free list len 0, seg size 2, 0 merges
    merged operations:
     insert 0, delete mark 0, delete 0
    discarded operations:
     insert 0, delete mark 0, delete 0
    Hash table size 4425293, used cells 32, node heap has 1 buffer(s)
    13577.57 hash searches/s, 202.47 non-hash searches/s

    For more information, see Section 15.17.3, “InnoDB Standard Monitor and Lock Monitor Output”.

  • The INFORMATION_SCHEMA.INNODB_METRICS table provides most of the data points found in InnoDB Standard Monitor output, plus other data points. To view change buffer metrics and a description of each, issue the following query:

    mysql> SELECT NAME, COMMENT FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME LIKE '%ibuf%'\G

    For INNODB_METRICS table usage information, see Section 15.15.6, “InnoDB INFORMATION_SCHEMA Metrics Table”.

  • The INFORMATION_SCHEMA.INNODB_BUFFER_PAGE table provides metadata about each page in the buffer pool, including change buffer index and change buffer bitmap pages. Change buffer pages are identified by PAGE_TYPE. IBUF_INDEX is the page type for change buffer index pages, and IBUF_BITMAP is the page type for change buffer bitmap pages.

    Warning

    Querying the INNODB_BUFFER_PAGE table can introduce significant performance overhead. To avoid impacting performance, reproduce the issue you want to investigate on a test instance and run your queries on the test instance.

    For example, you can query the INNODB_BUFFER_PAGE table to determine the approximate number of IBUF_INDEX and IBUF_BITMAP pages as a percentage of total buffer pool pages.

    SELECT
    (SELECT COUNT(*) FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
    WHERE PAGE_TYPE LIKE 'IBUF%'
    ) AS change_buffer_pages,
    (
    SELECT COUNT(*)
    FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
    ) AS total_pages,
    (
    SELECT ((change_buffer_pages/total_pages)*100)
    ) AS change_buffer_page_percentage;
    +---------------------+-------------+-------------------------------+
    | change_buffer_pages | total_pages | change_buffer_page_percentage |
    +---------------------+-------------+-------------------------------+
    |                  25 |        8192 |                        0.3052 |
    +---------------------+-------------+-------------------------------+

    For information about other data provided by the INNODB_BUFFER_PAGE table, see Section 24.31.1, “The INFORMATION_SCHEMA INNODB_BUFFER_PAGE Table”. For related usage information, see Section 15.15.5, “InnoDB INFORMATION_SCHEMA Buffer Pool Tables”.

  • Performance Schema provides change buffer mutex wait instrumentation for advanced performance monitoring. To view change buffer instrumentation, issue the following query:

    mysql> SELECT * FROM performance_schema.setup_instruments
    WHERE NAME LIKE '%wait/synch/mutex/innodb/ibuf%';
    +-------------------------------------------------------+---------+-------+
    | NAME                                                  | ENABLED | TIMED |
    +-------------------------------------------------------+---------+-------+
    | wait/synch/mutex/innodb/ibuf_bitmap_mutex             | YES     | YES   |
    | wait/synch/mutex/innodb/ibuf_mutex                    | YES     | YES   |
    | wait/synch/mutex/innodb/ibuf_pessimistic_insert_mutex | YES     | YES   |
    +-------------------------------------------------------+---------+-------+

    For information about monitoring InnoDB mutex waits, see Section 15.16.2, “Monitoring InnoDB Mutex Waits Using Performance Schema”.

15.4.3 Adaptive Hash Index

The adaptive hash index (AHI) lets InnoDB perform more like an in-memory database on systems with appropriate combinations of workload and ample memory for the buffer pool, without sacrificing any transactional features or reliability. This feature is enabled by the innodb_adaptive_hash_index option, or turned off by --skip-innodb_adaptive_hash_index at server startup.

Based on the observed pattern of searches, MySQL builds a hash index using a prefix of the index key. The prefix of the key can be any length, and it may be that only some of the values in the B-tree appear in the hash index. Hash indexes are built on demand for those pages of the index that are often accessed.

If a table fits almost entirely in main memory, a hash index can speed up queries by enabling direct lookup of any element, turning the index value into a sort of pointer. InnoDB has a mechanism that monitors index searches. If InnoDB notices that queries could benefit from building a hash index, it does so automatically.

With some workloads, the speedup from hash index lookups greatly outweighs the extra work to monitor index lookups and maintain the hash index structure. Sometimes, the read/write lock that guards access to the adaptive hash index can become a source of contention under heavy workloads, such as multiple concurrent joins. Queries with LIKE operators and % wildcards also tend not to benefit from the AHI. For workloads where the adaptive hash index is not needed, turning it off reduces unnecessary performance overhead. Because it is difficult to predict in advance whether this feature is appropriate for a particular system, consider running benchmarks with it both enabled and disabled, using a realistic workload. The architectural changes in MySQL 5.6 and higher make more workloads suitable for disabling the adaptive hash index than in earlier releases, although it is still enabled by default.

In MySQL 5.7, the adaptive hash index search system is partitioned. Each index is bound to a specific partition, and each partition is protected by a separate latch. Partitioning is controlled by the innodb_adaptive_hash_index_parts configuration option. In earlier releases, the adaptive hash index search system was protected by a single latch which could become a point of contention under heavy workloads. The innodb_adaptive_hash_index_parts option is set to 8 by default. The maximum setting is 512.

The hash index is always built based on an existing B-tree index on the table. InnoDB can build a hash index on a prefix of any length of the key defined for the B-tree, depending on the pattern of searches that InnoDB observes for the B-tree index. A hash index can be partial, covering only those pages of the index that are often accessed.

You can monitor the use of the adaptive hash index and the contention for its use in the SEMAPHORES section of the output of the SHOW ENGINE INNODB STATUS command. If you see many threads waiting on an RW-latch created in btr0sea.c, then it might be useful to disable adaptive hash indexing.
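
For example, if such contention appears and benchmarking confirms that your workload does not benefit, the feature can be turned off (and back on) at runtime without a restart:

mysql> SET GLOBAL innodb_adaptive_hash_index = OFF;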

For more information about the performance characteristics of hash indexes, see Section 9.3.8, “Comparison of B-Tree and Hash Indexes”.

15.4.4 Redo Log Buffer

The redo log buffer is the memory area that holds data to be written to the redo log. Redo log buffer size is defined by the innodb_log_buffer_size configuration option. The redo log buffer is periodically flushed to the log file on disk. A large redo log buffer enables large transactions to run without the need to write redo log to disk before the transactions commit. Thus, if you have transactions that update, insert, or delete many rows, making the log buffer larger saves disk I/O.

The innodb_flush_log_at_trx_commit option controls how the contents of the redo log buffer are written to the log file. The innodb_flush_log_at_timeout option controls redo log flushing frequency.
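
As a configuration sketch, a larger-than-default log buffer for a workload with large transactions might be set at startup as follows (64M is an illustrative value; the MySQL 5.7 default is 16MB):

[mysqld]
innodb_log_buffer_size = 64M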

15.4.5 System Tablespace

The InnoDB system tablespace contains the InnoDB data dictionary (metadata for InnoDB-related objects) and is the storage area for the doublewrite buffer, the change buffer, and undo logs. The system tablespace also contains table and index data for any user-created tables that are created in the system tablespace. The system tablespace is considered a shared tablespace since it is shared by multiple tables.

The system tablespace is represented by one or more data files. By default, one system data file, named ibdata1, is created in the MySQL data directory. The size and number of system data files is controlled by the innodb_data_file_path startup option.
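
For example, a my.cnf fragment defining a fixed-size first data file and an auto-extending second data file might look like this (file names and sizes are illustrative):

[mysqld]
innodb_data_file_path = ibdata1:1G;ibdata2:50M:autoextend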

For related information, see Section 15.6.1, “InnoDB Startup Configuration”, and Section 15.7.1, “Resizing the InnoDB System Tablespace”.

15.4.6 InnoDB Data Dictionary

The InnoDB data dictionary consists of internal system tables that contain metadata used to keep track of objects such as tables, indexes, and table columns. The metadata is physically located in the InnoDB system tablespace. For historical reasons, data dictionary metadata overlaps to some degree with information stored in InnoDB table metadata files (.frm files).

15.4.7 Doublewrite Buffer

The doublewrite buffer is a storage area located in the system tablespace where InnoDB writes pages that are flushed from the InnoDB buffer pool, before the pages are written to their proper positions in the data file. Only after flushing and writing pages to the doublewrite buffer, does InnoDB write pages to their proper positions. If there is an operating system, storage subsystem, or mysqld process crash in the middle of a page write, InnoDB can later find a good copy of the page from the doublewrite buffer during crash recovery.

Although data is always written twice, the doublewrite buffer does not require twice as much I/O overhead or twice as many I/O operations. Data is written to the doublewrite buffer itself as a large sequential chunk, with a single fsync() call to the operating system.

The doublewrite buffer is enabled by default in most cases. To disable the doublewrite buffer, set innodb_doublewrite to 0.

If system tablespace files (ibdata files) are located on Fusion-io devices that support atomic writes, doublewrite buffering is automatically disabled and Fusion-io atomic writes are used for all data files. Because the doublewrite buffer setting is global, doublewrite buffering is also disabled for data files residing on non-Fusion-io hardware. This feature is only supported on Fusion-io hardware and is only enabled for Fusion-io NVMFS on Linux. To take full advantage of this feature, an innodb_flush_method setting of O_DIRECT is recommended.

15.4.8 Undo Logs

An undo log is a collection of undo log records associated with a single transaction. An undo log record contains information about how to undo the latest change by a transaction to a clustered index record. If another transaction needs to see the original data (as part of a consistent read operation), the unmodified data is retrieved from the undo log records. Undo logs exist within undo log segments, which are contained within rollback segments. By default, rollback segments are physically part of the system tablespace. However, rollback segments can reside in separate undo tablespaces. For more information, see Section 15.7.7, “Storing InnoDB Undo Logs in Separate Tablespaces”. For information about multi-versioning, see Section 15.3, “InnoDB Multi-Versioning”.

InnoDB supports 128 rollback segments, 32 of which are reserved as non-redo rollback segments for temporary table transactions. Each transaction that updates a temporary table (excluding read-only transactions) is assigned two rollback segments, one redo-enabled rollback segment and one non-redo rollback segment. Read-only transactions are only assigned non-redo rollback segments, as read-only transactions are only permitted to modify temporary tables.

This leaves 96 available rollback segments, each of which supports up to 1023 concurrent data-modifying transactions, for a total limit of approximately 96K concurrent data-modifying transactions. The 96K limit assumes that transactions do not modify temporary tables. If all data-modifying transactions also modify temporary tables, the total limit is approximately 32K concurrent data modifying transactions. For more information about rollback segments that are reserved for temporary table transactions, see Section 15.4.12.1, “InnoDB Temporary Table Undo Logs”.

The innodb_undo_logs option defines the number of rollback segments used by InnoDB.

15.4.9 File-Per-Table Tablespaces

A file-per-table tablespace is a single-table tablespace that is created in its own data file rather than in the system tablespace. Tables are created in file-per-table tablespaces when the innodb_file_per_table option is enabled. Otherwise, InnoDB tables are created in the system tablespace. Each file-per-table tablespace is represented by a single .ibd data file, which is created in the database directory by default.

File-per-table tablespaces support DYNAMIC and COMPRESSED row formats, which support features such as off-page storage for variable-length data and table compression. For information about these features, and about other advantages of file-per-table tablespaces, see Section 15.7.4, “InnoDB File-Per-Table Tablespaces”.

15.4.10 General Tablespaces

A general tablespace is a shared InnoDB tablespace created using CREATE TABLESPACE syntax. General tablespaces can be created outside of the MySQL data directory, are capable of holding multiple tables, and support tables of all row formats.

Tables are added to a general tablespace using CREATE TABLE tbl_name ... TABLESPACE [=] tablespace_name or ALTER TABLE tbl_name TABLESPACE [=] tablespace_name syntax.
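
A minimal sketch of creating a general tablespace and placing tables in it (the tablespace, file, and table names are illustrative, and t2 is assumed to be an existing table):

mysql> CREATE TABLESPACE ts1 ADD DATAFILE 'ts1.ibd' ENGINE=InnoDB;
mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1;
mysql> ALTER TABLE t2 TABLESPACE ts1;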

For more information, see Section 15.7.9, “InnoDB General Tablespaces”.

15.4.11 Undo Tablespace

An undo tablespace comprises one or more files that contain undo logs. Undo logs exist within undo log segments, which are contained within rollback segments. By default, rollback segments are physically part of the system tablespace. However, rollback segments can reside in separate undo tablespaces. An undo tablespace is created when the undo log is separated from the system tablespace using the innodb_undo_tablespaces and innodb_undo_directory configuration options.
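
As a configuration sketch (innodb_undo_tablespaces must be set when the MySQL instance is initialized and cannot be changed afterward; the directory path and counts are illustrative):

[mysqld]
innodb_undo_directory = /path/to/undo_dir
innodb_undo_tablespaces = 4
innodb_undo_logs = 128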

For more information, see Section 15.7.7, “Storing InnoDB Undo Logs in Separate Tablespaces”.

15.4.12 Temporary Tablespace

The temporary tablespace is a tablespace for non-compressed InnoDB temporary tables and related objects. The configuration option, innodb_temp_data_file_path, defines a relative path for the temporary tablespace data file. If innodb_temp_data_file_path is not defined, a single auto-extending 12MB data file named ibtmp1 is created in the data directory. The temporary tablespace is recreated on each server start and receives a dynamically generated space ID, which helps avoid conflicts with existing space IDs. The temporary tablespace cannot reside on a raw device. Startup is refused if the temporary tablespace cannot be created.
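
For example, to specify a larger initial size and cap the growth of the temporary tablespace data file, a my.cnf fragment might look like this (the sizes are illustrative):

[mysqld]
innodb_temp_data_file_path = ibtmp1:128M:autoextend:max:5G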

The temporary tablespace is removed on normal shutdown or on an aborted initialization. The temporary tablespace is not removed when a crash occurs. In this case, the database administrator may remove the temporary tablespace manually or restart the server with the same configuration, which removes and recreates the temporary tablespace.

15.4.12.1 InnoDB Temporary Table Undo Logs

Temporary table undo logs are used for temporary tables and related objects. This type of undo log is not a redo log, as temporary tables are not recovered during crash recovery and do not require redo logs. Temporary table undo logs are, however, used for rollback while the server is running. This special type of non-redo undo log benefits performance by avoiding redo logging I/O for temporary tables and related objects. Temporary table undo logs reside in the temporary tablespace. The default temporary tablespace file, ibtmp1, is located in the data directory by default and is always recreated on server startup. A user defined location for the temporary tablespace file can be specified by setting innodb_temp_data_file_path.

32 rollback segments are reserved for temporary table undo logs for transactions that modify temporary tables and related objects, which means that the maximum number of rollback segments available for data-modifying transactions that generate undo records is 96. With 96 available rollback segments, the limit on concurrent data-modifying transactions is 96K. For more information see Section 15.3, “InnoDB Multi-Versioning” and Section 15.8.8, “Limits on InnoDB Tables”.

15.4.13 Redo Log

The redo log is a disk-based data structure used during crash recovery to correct data written by incomplete transactions. During normal operations, the redo log encodes requests to change InnoDB table data that result from SQL statements or low-level API calls. Modifications that did not finish updating the data files before an unexpected shutdown are replayed automatically during initialization, and before the connections are accepted. For information about the role of the redo log in crash recovery, see Section 15.18.2, “InnoDB Recovery”.

By default, the redo log is physically represented on disk as a set of files, named ib_logfile0 and ib_logfile1. MySQL writes to the redo log files in a circular fashion. Data in the redo log is encoded in terms of records affected; this data is collectively referred to as redo. The passage of data through the redo log is represented by an ever-increasing LSN value.

For related information, see:

15.4.13.1 Group Commit for Redo Log Flushing

InnoDB, like any other ACID-compliant database engine, flushes the redo log of a transaction before it is committed. InnoDB uses group commit functionality to group multiple such flush requests together to avoid one flush for each commit. With group commit, InnoDB issues a single write to the log file to perform the commit action for multiple user transactions that commit at about the same time, significantly improving throughput.

For more information about performance of COMMIT and other transactional operations, see Section 9.5.2, “Optimizing InnoDB Transaction Management”.

15.5 InnoDB Locking and Transaction Model

To implement a large-scale, busy, or highly reliable database application, to port substantial code from a different database system, or to tune MySQL performance, it is important to understand InnoDB locking and the InnoDB transaction model.

This section discusses several topics related to InnoDB locking and the InnoDB transaction model with which you should be familiar.

15.5.1 InnoDB Locking

This section describes lock types used by InnoDB.

Shared and Exclusive Locks

InnoDB implements standard row-level locking where there are two types of locks, shared (S) locks and exclusive (X) locks.

  • A shared (S) lock permits the transaction that holds the lock to read a row.

  • An exclusive (X) lock permits the transaction that holds the lock to update or delete a row.

If transaction T1 holds a shared (S) lock on row r, then requests from some distinct transaction T2 for a lock on row r are handled as follows:

  • A request by T2 for an S lock can be granted immediately. As a result, both T1 and T2 hold an S lock on r.

  • A request by T2 for an X lock cannot be granted immediately.

If a transaction T1 holds an exclusive (X) lock on row r, a request from some distinct transaction T2 for a lock of either type on r cannot be granted immediately. Instead, transaction T2 has to wait for transaction T1 to release its lock on row r.

Intention Locks

InnoDB supports multiple granularity locking which permits coexistence of row-level locks and locks on entire tables. To make locking at multiple granularity levels practical, additional types of locks called intention locks are used. Intention locks are table-level locks in InnoDB that indicate which type of lock (shared or exclusive) a transaction requires later for a row in that table. There are two types of intention locks used in InnoDB (assume that transaction T has requested a lock of the indicated type on table t):

  • An intention shared lock (IS) indicates that transaction T intends to set a shared (S) lock on individual rows in table t.

  • An intention exclusive lock (IX) indicates that transaction T intends to set an exclusive (X) lock on individual rows in table t.

For example, SELECT ... LOCK IN SHARE MODE sets an IS lock and SELECT ... FOR UPDATE sets an IX lock.

The intention locking protocol is as follows:

  • Before a transaction can acquire an S lock on a row in table t, it must first acquire an IS or stronger lock on t.

  • Before a transaction can acquire an X lock on a row, it must first acquire an IX lock on t.

These rules can be conveniently summarized by means of the following lock type compatibility matrix.

         X            IX           S            IS
X        Conflict     Conflict     Conflict     Conflict
IX       Conflict     Compatible   Conflict     Compatible
S        Conflict     Conflict     Compatible   Compatible
IS       Conflict     Compatible   Compatible   Compatible

A lock is granted to a requesting transaction if it is compatible with existing locks, but not if it conflicts with existing locks. A transaction waits until the conflicting existing lock is released. If a lock request conflicts with an existing lock and cannot be granted because it would cause deadlock, an error occurs.

Thus, intention locks do not block anything except full table requests (for example, LOCK TABLES ... WRITE). The main purpose of IX and IS locks is to show that someone is locking a row, or going to lock a row in the table.

Transaction data for an intention lock appears similar to the following in SHOW ENGINE INNODB STATUS and InnoDB monitor output:

TABLE LOCK table `test`.`t` trx id 10080 lock mode IX

Record Locks

A record lock is a lock on an index record. For example, SELECT c1 FROM t WHERE c1 = 10 FOR UPDATE; prevents any other transaction from inserting, updating, or deleting rows where the value of t.c1 is 10.

Record locks always lock index records, even if a table is defined with no indexes. For such cases, InnoDB creates a hidden clustered index and uses this index for record locking. See Section 15.8.9, “Clustered and Secondary Indexes”.

Transaction data for a record lock appears similar to the following in SHOW ENGINE INNODB STATUS and InnoDB monitor output:

RECORD LOCKS space id 58 page no 3 n bits 72 index `PRIMARY` of table `test`.`t` 
trx id 10078 lock_mode X locks rec but not gap
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
 0: len 4; hex 8000000a; asc     ;;
 1: len 6; hex 00000000274f; asc     'O;;
 2: len 7; hex b60000019d0110; asc        ;;

Gap Locks

A gap lock is a lock on a gap between index records, or a lock on the gap before the first or after the last index record. For example, SELECT c1 FROM t WHERE c1 BETWEEN 10 and 20 FOR UPDATE; prevents other transactions from inserting a value of 15 into column t.c1, whether or not there was already any such value in the column, because the gaps between all existing values in the range are locked.

A gap might span a single index value, multiple index values, or even be empty.

Gap locks are part of the tradeoff between performance and concurrency, and are used in some transaction isolation levels and not others.

Gap locking is not needed for statements that lock rows using a unique index to search for a unique row. (This does not include the case that the search condition includes only some columns of a multiple-column unique index; in that case, gap locking does occur.) For example, if the id column has a unique index, the following statement uses only an index-record lock for the row having id value 100 and it does not matter whether other sessions insert rows in the preceding gap:

SELECT * FROM child WHERE id = 100;

If id is not indexed or has a nonunique index, the statement does lock the preceding gap.

It is also worth noting here that conflicting locks can be held on a gap by different transactions. For example, transaction A can hold a shared gap lock (gap S-lock) on a gap while transaction B holds an exclusive gap lock (gap X-lock) on the same gap. The reason conflicting gap locks are allowed is that if a record is purged from an index, the gap locks held on the record by different transactions must be merged.

Gap locks in InnoDB are purely inhibitive, which means they only stop other transactions from inserting to the gap. They do not prevent different transactions from taking gap locks on the same gap. Thus, a gap X-lock has the same effect as a gap S-lock.

Gap locking can be disabled explicitly. This occurs if you change the transaction isolation level to READ COMMITTED or enable the innodb_locks_unsafe_for_binlog system variable (which is now deprecated). Under these circumstances, gap locking is disabled for searches and index scans and is used only for foreign-key constraint checking and duplicate-key checking.

There are also other effects of using the READ COMMITTED isolation level or enabling innodb_locks_unsafe_for_binlog. Record locks for nonmatching rows are released after MySQL has evaluated the WHERE condition. For UPDATE statements, InnoDB does a semi-consistent read, such that it returns the latest committed version to MySQL so that MySQL can determine whether the row matches the WHERE condition of the UPDATE.

Next-Key Locks

A next-key lock is a combination of a record lock on the index record and a gap lock on the gap before the index record.

InnoDB performs row-level locking in such a way that when it searches or scans a table index, it sets shared or exclusive locks on the index records it encounters. Thus, the row-level locks are actually index-record locks. A next-key lock on an index record also affects the gap before that index record. That is, a next-key lock is an index-record lock plus a gap lock on the gap preceding the index record. If one session has a shared or exclusive lock on record R in an index, another session cannot insert a new index record in the gap immediately before R in the index order.

Suppose that an index contains the values 10, 11, 13, and 20. The possible next-key locks for this index cover the following intervals, where a round bracket denotes exclusion of the interval endpoint and a square bracket denotes inclusion of the endpoint:

(negative infinity, 10]
(10, 11]
(11, 13]
(13, 20]
(20, positive infinity)

For the last interval, the next-key lock locks the gap above the largest value in the index and the supremum pseudo-record having a value higher than any value actually in the index. The supremum is not a real index record, so, in effect, this next-key lock locks only the gap following the largest index value.

By default, InnoDB operates in REPEATABLE READ transaction isolation level. In this case, InnoDB uses next-key locks for searches and index scans, which prevents phantom rows (see Section 15.5.4, “Phantom Rows”).

Transaction data for a next-key lock appears similar to the following in SHOW ENGINE INNODB STATUS and InnoDB monitor output:

RECORD LOCKS space id 58 page no 3 n bits 72 index `PRIMARY` of table `test`.`t` 
trx id 10080 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
 0: len 4; hex 8000000a; asc     ;;
 1: len 6; hex 00000000274f; asc     'O;;
 2: len 7; hex b60000019d0110; asc        ;;

Insert Intention Locks

An insert intention lock is a type of gap lock set by INSERT operations prior to row insertion. This lock signals the intent to insert in such a way that multiple transactions inserting into the same index gap need not wait for each other if they are not inserting at the same position within the gap. Suppose that there are index records with values of 4 and 7. Separate transactions that attempt to insert values of 5 and 6, respectively, each lock the gap between 4 and 7 with insert intention locks prior to obtaining the exclusive lock on the inserted row, but do not block each other because the rows are nonconflicting.

The following example demonstrates a transaction taking an insert intention lock prior to obtaining an exclusive lock on the inserted record. The example involves two clients, A and B.

Client A creates a table containing two index records (90 and 102) and then starts a transaction that places an exclusive lock on index records with an ID greater than 100. The exclusive lock includes a gap lock before record 102:

mysql> CREATE TABLE child (id int(11) NOT NULL, PRIMARY KEY(id)) ENGINE=InnoDB;
mysql> INSERT INTO child (id) values (90),(102);

mysql> START TRANSACTION;
mysql> SELECT * FROM child WHERE id > 100 FOR UPDATE;
+-----+
| id  |
+-----+
| 102 |
+-----+

Client B begins a transaction to insert a record into the gap. The transaction takes an insert intention lock while it waits to obtain an exclusive lock.

mysql> START TRANSACTION;
mysql> INSERT INTO child (id) VALUES (101);

Transaction data for an insert intention lock appears similar to the following in SHOW ENGINE INNODB STATUS and InnoDB monitor output:

RECORD LOCKS space id 31 page no 3 n bits 72 index `PRIMARY` of table `test`.`child`
trx id 8731 lock_mode X locks gap before rec insert intention waiting
Record lock, heap no 3 PHYSICAL RECORD: n_fields 3; compact format; info bits 0
 0: len 4; hex 80000066; asc    f;;
 1: len 6; hex 000000002215; asc     " ;;
 2: len 7; hex 9000000172011c; asc     r  ;;...

AUTO-INC Locks

An AUTO-INC lock is a special table-level lock taken by transactions inserting into tables with AUTO_INCREMENT columns. In the simplest case, if one transaction is inserting values into the table, any other transactions must wait to do their own inserts into that table, so that rows inserted by the first transaction receive consecutive primary key values.

The innodb_autoinc_lock_mode configuration option controls the algorithm used for auto-increment locking. It allows you to choose how to trade off between predictable sequences of auto-increment values and maximum concurrency for insert operations.

For more information, see Section 15.8.6, “AUTO_INCREMENT Handling in InnoDB”.
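
For example, and purely as an illustration rather than a recommendation, innodb_autoinc_lock_mode could be set in an option file; a value of 2 (interleaved) favors insert concurrency over consecutive auto-increment values:

[mysqld]
# 0 = traditional, 1 = consecutive (the default), 2 = interleaved
innodb_autoinc_lock_mode=2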

Predicate Locks for Spatial Indexes

InnoDB supports SPATIAL indexing of columns containing spatial data (see Section 12.5.3.5, “Optimizing Spatial Analysis”).

Next-key locking does not work well for handling locking of operations involving SPATIAL indexes under the REPEATABLE READ or SERIALIZABLE transaction isolation levels: there is no absolute ordering concept in multidimensional data, so it is not clear which key is the "next" key.

To enable support of isolation levels for tables with SPATIAL indexes, InnoDB uses predicate locks. A SPATIAL index contains minimum bounding rectangle (MBR) values, so InnoDB enforces consistent read on the index by setting a predicate lock on the MBR value used for a query. Other transactions cannot insert or modify a row that would match the query condition.
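
For illustration, the following sketch uses a hypothetical shapes table; the locking read sets a predicate lock on the minimum bounding rectangle of the search area rather than a next-key lock:

CREATE TABLE shapes (
  id INT NOT NULL PRIMARY KEY,
  g GEOMETRY NOT NULL,
  SPATIAL INDEX(g)
) ENGINE=InnoDB;

-- Other transactions cannot insert or modify rows whose geometry would
-- match the locked MBR until this transaction ends.
SELECT id FROM shapes
WHERE MBRIntersects(g, ST_GeomFromText('POLYGON((0 0,0 10,10 10,10 0,0 0))'))
FOR UPDATE;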

15.5.2 InnoDB Transaction Model

In the InnoDB transaction model, the goal is to combine the best properties of a multi-versioning database with traditional two-phase locking. InnoDB performs locking at the row level and runs queries as nonlocking consistent reads by default, in the style of Oracle. The lock information in InnoDB is stored space-efficiently so that lock escalation is not needed. Typically, several users are permitted to lock every row in InnoDB tables, or any random subset of the rows, without causing InnoDB memory exhaustion.

15.5.2.1 Transaction Isolation Levels

Transaction isolation is one of the foundations of database processing. Isolation is the I in the acronym ACID; the isolation level is the setting that fine-tunes the balance between performance and reliability, consistency, and reproducibility of results when multiple transactions are making changes and performing queries at the same time.

InnoDB offers all four transaction isolation levels described by the SQL:1992 standard: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE. The default isolation level for InnoDB is REPEATABLE READ.

A user can change the isolation level for a single session or for all subsequent connections with the SET TRANSACTION statement. To set the server's default isolation level for all connections, use the --transaction-isolation option on the command line or in an option file. For detailed information about isolation levels and level-setting syntax, see Section 14.3.6, “SET TRANSACTION Syntax”.
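
For example (an illustrative sketch; the GLOBAL form requires appropriate privileges):

-- Apply to all subsequent transactions in the current session:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Apply to all subsequent sessions:
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;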

InnoDB supports each of the transaction isolation levels described here using different locking strategies. You can enforce a high degree of consistency with the default REPEATABLE READ level, for operations on crucial data where ACID compliance is important. Or you can relax the consistency rules with READ COMMITTED or even READ UNCOMMITTED, in situations such as bulk reporting where precise consistency and repeatable results are less important than minimizing the amount of overhead for locking. SERIALIZABLE enforces even stricter rules than REPEATABLE READ, and is used mainly in specialized situations, such as with XA transactions and for troubleshooting issues with concurrency and deadlocks.

The following list describes how MySQL supports the different transaction levels. The list goes from the most commonly used level to the least used.

  • REPEATABLE READ

    This is the default isolation level for InnoDB. Consistent reads within the same transaction read the snapshot established by the first read. This means that if you issue several plain (nonlocking) SELECT statements within the same transaction, these SELECT statements are consistent also with respect to each other. See Section 15.5.2.3, “Consistent Nonlocking Reads”.

    For locking reads (SELECT with FOR UPDATE or LOCK IN SHARE MODE), UPDATE, and DELETE statements, locking depends on whether the statement uses a unique index with a unique search condition, or a range-type search condition.

    • For a unique index with a unique search condition, InnoDB locks only the index record found, not the gap before it.

    • For other search conditions, InnoDB locks the index range scanned, using gap locks or next-key locks to block insertions by other sessions into the gaps covered by the range. For information about gap locks and next-key locks, see Section 15.5.1, “InnoDB Locking”.

  • READ COMMITTED

    Each consistent read, even within the same transaction, sets and reads its own fresh snapshot. For information about consistent reads, see Section 15.5.2.3, “Consistent Nonlocking Reads”.

    For locking reads (SELECT with FOR UPDATE or LOCK IN SHARE MODE), UPDATE statements, and DELETE statements, InnoDB locks only index records, not the gaps before them, and thus permits the free insertion of new records next to locked records. Gap locking is only used for foreign-key constraint checking and duplicate-key checking.

    Because gap locking is disabled, phantom problems may occur, as other sessions can insert new rows into the gaps. For information about phantoms, see Section 15.5.4, “Phantom Rows”.

    If you use READ COMMITTED, you must use row-based binary logging.

    Using READ COMMITTED has additional effects:

    • For UPDATE or DELETE statements, InnoDB holds locks only for rows that it updates or deletes. Record locks for nonmatching rows are released after MySQL has evaluated the WHERE condition. This greatly reduces the probability of deadlocks, but they can still happen.

    • For UPDATE statements, if a row is already locked, InnoDB performs a semi-consistent read, returning the latest committed version to MySQL so that MySQL can determine whether the row matches the WHERE condition of the UPDATE. If the row matches (must be updated), MySQL reads the row again and this time InnoDB either locks it or waits for a lock on it.

    Consider the following example, beginning with this table:

    CREATE TABLE t (a INT NOT NULL, b INT) ENGINE = InnoDB;
    INSERT INTO t VALUES (1,2),(2,3),(3,2),(4,3),(5,2);
    COMMIT;
    

    In this case, the table has no indexes, so searches and index scans use the hidden clustered index for record locking (see Section 15.8.9, “Clustered and Secondary Indexes”).

    Suppose that one client performs an UPDATE using these statements:

    SET autocommit = 0;
    UPDATE t SET b = 5 WHERE b = 3;
    

    Suppose also that a second client performs an UPDATE by executing these statements following those of the first client:

    SET autocommit = 0;
    UPDATE t SET b = 4 WHERE b = 2;
    

    As InnoDB executes each UPDATE, it first acquires an exclusive lock for each row, and then determines whether to modify it. If InnoDB does not modify the row, it releases the lock. Otherwise, InnoDB retains the lock until the end of the transaction. This affects transaction processing as follows.

    When using the default REPEATABLE READ isolation level, the first UPDATE acquires x-locks and does not release any of them:

    x-lock(1,2); retain x-lock
    x-lock(2,3); update(2,3) to (2,5); retain x-lock
    x-lock(3,2); retain x-lock
    x-lock(4,3); update(4,3) to (4,5); retain x-lock
    x-lock(5,2); retain x-lock
    

    The second UPDATE blocks as soon as it tries to acquire any locks (because the first UPDATE has retained locks on all rows), and does not proceed until the first UPDATE commits or rolls back:

    x-lock(1,2); block and wait for first UPDATE to commit or roll back
    

    If READ COMMITTED is used instead, the first UPDATE acquires x-locks and releases those for rows that it does not modify:

    x-lock(1,2); unlock(1,2)
    x-lock(2,3); update(2,3) to (2,5); retain x-lock
    x-lock(3,2); unlock(3,2)
    x-lock(4,3); update(4,3) to (4,5); retain x-lock
    x-lock(5,2); unlock(5,2)
    

    For the second UPDATE, InnoDB does a semi-consistent read, returning the latest committed version of each row to MySQL so that MySQL can determine whether the row matches the WHERE condition of the UPDATE:

    x-lock(1,2); update(1,2) to (1,4); retain x-lock
    x-lock(2,3); unlock(2,3)
    x-lock(3,2); update(3,2) to (3,4); retain x-lock
    x-lock(4,3); unlock(4,3)
    x-lock(5,2); update(5,2) to (5,4); retain x-lock
    

    The effects of using the READ COMMITTED isolation level are the same as enabling the deprecated innodb_locks_unsafe_for_binlog configuration option, with these exceptions:

    • innodb_locks_unsafe_for_binlog is a global setting and affects all sessions, whereas the isolation level can be set globally for all sessions, or individually per session.

    • innodb_locks_unsafe_for_binlog can be set only at server startup, whereas the isolation level can be set at startup or changed at runtime.

    READ COMMITTED therefore offers finer and more flexible control than innodb_locks_unsafe_for_binlog.

  • READ UNCOMMITTED

    SELECT statements are performed in a nonlocking fashion, but a possible earlier version of a row might be used. Thus, using this isolation level, such reads are not consistent. This is also called a dirty read. Otherwise, this isolation level works like READ COMMITTED.

  • SERIALIZABLE

    This level is like REPEATABLE READ, but InnoDB implicitly converts all plain SELECT statements to SELECT ... LOCK IN SHARE MODE if autocommit is disabled. If autocommit is enabled, the SELECT is its own transaction. It therefore is known to be read only and can be serialized if performed as a consistent (nonlocking) read and need not block for other transactions. (To force a plain SELECT to block if other transactions have modified the selected rows, disable autocommit.)

15.5.2.2 autocommit, Commit, and Rollback

In InnoDB, all user activity occurs inside a transaction. If autocommit mode is enabled, each SQL statement forms a single transaction on its own. By default, MySQL starts the session for each new connection with autocommit enabled, so MySQL does a commit after each SQL statement if that statement did not return an error. If a statement returns an error, the commit or rollback behavior depends on the error. See Section 15.21.4, “InnoDB Error Handling”.

A session that has autocommit enabled can perform a multiple-statement transaction by starting it with an explicit START TRANSACTION or BEGIN statement and ending it with a COMMIT or ROLLBACK statement. See Section 14.3.1, “START TRANSACTION, COMMIT, and ROLLBACK Syntax”.

If autocommit mode is disabled within a session with SET autocommit = 0, the session always has a transaction open. A COMMIT or ROLLBACK statement ends the current transaction and a new one starts.

If a session that has autocommit disabled ends without explicitly committing the final transaction, MySQL rolls back that transaction.

Some statements implicitly end a transaction, as if you had done a COMMIT before executing the statement. For details, see Section 14.3.3, “Statements That Cause an Implicit Commit”.

A COMMIT means that the changes made in the current transaction are made permanent and become visible to other sessions. A ROLLBACK statement, on the other hand, cancels all modifications made by the current transaction. Both COMMIT and ROLLBACK release all InnoDB locks that were set during the current transaction.

Grouping DML Operations with Transactions

By default, a connection to the MySQL server begins with autocommit mode enabled, which automatically commits every SQL statement as you execute it. This mode of operation might be unfamiliar if you have experience with other database systems, where it is standard practice to issue a sequence of DML statements and commit them or roll them back all together.

To use multiple-statement transactions, switch autocommit off with the SQL statement SET autocommit = 0 and end each transaction with COMMIT or ROLLBACK as appropriate. To leave autocommit on, begin each transaction with START TRANSACTION and end it with COMMIT or ROLLBACK. The following example shows two transactions. The first is committed; the second is rolled back.

shell> mysql test

mysql> CREATE TABLE customer (a INT, b CHAR (20), INDEX (a));
Query OK, 0 rows affected (0.00 sec)
mysql> -- Do a transaction with autocommit turned on.
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO customer VALUES (10, 'Heikki');
Query OK, 1 row affected (0.00 sec)
mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)
mysql> -- Do another transaction with autocommit turned off.
mysql> SET autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO customer VALUES (15, 'John');
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO customer VALUES (20, 'Paul');
Query OK, 1 row affected (0.00 sec)
mysql> DELETE FROM customer WHERE b = 'Heikki';
Query OK, 1 row affected (0.00 sec)
mysql> -- Now we undo those last 2 inserts and the delete.
mysql> ROLLBACK;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT * FROM customer;
+------+--------+
| a    | b      |
+------+--------+
|   10 | Heikki |
+------+--------+
1 row in set (0.00 sec)
mysql>

Transactions in Client-Side Languages

In APIs such as PHP, Perl DBI, JDBC, ODBC, or the standard C call interface of MySQL, you can send transaction control statements such as COMMIT to the MySQL server as strings just like any other SQL statements such as SELECT or INSERT. Some APIs also offer separate special transaction commit and rollback functions or methods.

15.5.2.3 Consistent Nonlocking Reads

A consistent read means that InnoDB uses multi-versioning to present to a query a snapshot of the database at a point in time. The query sees the changes made by transactions that committed before that point of time, and no changes made by later or uncommitted transactions. The exception to this rule is that the query sees the changes made by earlier statements within the same transaction. This exception causes the following anomaly: If you update some rows in a table, a SELECT sees the latest version of the updated rows, but it might also see older versions of any rows. If other sessions simultaneously update the same table, the anomaly means that you might see the table in a state that never existed in the database.

If the transaction isolation level is REPEATABLE READ (the default level), all consistent reads within the same transaction read the snapshot established by the first such read in that transaction. You can get a fresher snapshot for your queries by committing the current transaction and after that issuing new queries.

With READ COMMITTED isolation level, each consistent read within a transaction sets and reads its own fresh snapshot.

Consistent read is the default mode in which InnoDB processes SELECT statements in READ COMMITTED and REPEATABLE READ isolation levels. A consistent read does not set any locks on the tables it accesses, and therefore other sessions are free to modify those tables at the same time a consistent read is being performed on the table.

Suppose that you are running in the default REPEATABLE READ isolation level. When you issue a consistent read (that is, an ordinary SELECT statement), InnoDB gives your transaction a timepoint according to which your query sees the database. If another transaction deletes a row and commits after your timepoint was assigned, you do not see the row as having been deleted. Inserts and updates are treated similarly.

Note

The snapshot of the database state applies to SELECT statements within a transaction, not necessarily to DML statements. If you insert or modify some rows and then commit that transaction, a DELETE or UPDATE statement issued from another concurrent REPEATABLE READ transaction could affect those just-committed rows, even though the session could not query them. If a transaction does update or delete rows committed by a different transaction, those changes do become visible to the current transaction. For example, you might encounter a situation like the following:

SELECT COUNT(c1) FROM t1 WHERE c1 = 'xyz';
-- Returns 0: no rows match.
DELETE FROM t1 WHERE c1 = 'xyz';
-- Deletes several rows recently committed by other transaction.

SELECT COUNT(c2) FROM t1 WHERE c2 = 'abc';
-- Returns 0: no rows match.
UPDATE t1 SET c2 = 'cba' WHERE c2 = 'abc';
-- Affects 10 rows: another txn just committed 10 rows with 'abc' values.
SELECT COUNT(c2) FROM t1 WHERE c2 = 'cba';
-- Returns 10: this txn can now see the rows it just updated.

You can advance your timepoint by committing your transaction and then doing another SELECT or START TRANSACTION WITH CONSISTENT SNAPSHOT.
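
For example, a minimal sketch, assuming a table t and a session with autocommit disabled:

COMMIT;

-- Either: the first consistent read in the new transaction takes a fresh snapshot
SELECT * FROM t;

-- Or: take the fresh snapshot immediately, before issuing any query
START TRANSACTION WITH CONSISTENT SNAPSHOT;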

This is called multi-versioned concurrency control.

In the following example, session A sees the row inserted by B only when B has committed the insert and A has committed as well, so that the timepoint is advanced past the commit of B.

             Session A              Session B

           SET autocommit=0;      SET autocommit=0;
time
|          SELECT * FROM t;
|          empty set
|                                 INSERT INTO t VALUES (1, 2);
|
v          SELECT * FROM t;
           empty set
                                  COMMIT;

           SELECT * FROM t;
           empty set

           COMMIT;

           SELECT * FROM t;
           ---------------------
           |    1    |    2    |
           ---------------------

If you want to see the freshest state of the database, use either the READ COMMITTED isolation level or a locking read:

SELECT * FROM t LOCK IN SHARE MODE;

With READ COMMITTED isolation level, each consistent read within a transaction sets and reads its own fresh snapshot. With LOCK IN SHARE MODE, a locking read occurs instead: A SELECT blocks until the transaction containing the freshest rows ends (see Section 15.5.2.4, “Locking Reads”).

Consistent read does not work over certain DDL statements:

  • Consistent read does not work over DROP TABLE, because MySQL cannot use a table that has been dropped and InnoDB destroys the table.

  • Consistent read does not work over ALTER TABLE, because that statement makes a temporary copy of the original table and deletes the original table when the temporary copy is built. When you reissue a consistent read within a transaction, rows in the new table are not visible because those rows did not exist when the transaction's snapshot was taken. In this case, the transaction returns an error: ER_TABLE_DEF_CHANGED, Table definition has changed, please retry transaction.

The type of read varies for selects in clauses like INSERT INTO ... SELECT, UPDATE ... (SELECT), and CREATE TABLE ... SELECT that do not specify FOR UPDATE or LOCK IN SHARE MODE:

  • By default, InnoDB uses stronger locks and the SELECT part acts like READ COMMITTED, where each consistent read, even within the same transaction, sets and reads its own fresh snapshot.

  • To use a consistent read in such cases, enable the innodb_locks_unsafe_for_binlog option and set the isolation level of the transaction to READ UNCOMMITTED, READ COMMITTED, or REPEATABLE READ (that is, anything other than SERIALIZABLE). In this case, no locks are set on rows read from the selected table.

15.5.2.4 Locking Reads

If you query data and then insert or update related data within the same transaction, the regular SELECT statement does not give enough protection. Other transactions can update or delete the same rows you just queried. InnoDB supports two types of locking reads that offer extra safety:

  • SELECT ... LOCK IN SHARE MODE

    Sets a shared mode lock on any rows that are read. Other sessions can read the rows, but cannot modify them until your transaction commits. If any of these rows were changed by another transaction that has not yet committed, your query waits until that transaction ends and then uses the latest values.

  • SELECT ... FOR UPDATE

    For index records the search encounters, locks the rows and any associated index entries, the same as if you issued an UPDATE statement for those rows. Other transactions are blocked from updating those rows, from doing SELECT ... LOCK IN SHARE MODE, or from reading the data in certain transaction isolation levels. Consistent reads ignore any locks set on the records that exist in the read view. (Old versions of a record cannot be locked; they are reconstructed by applying undo logs on an in-memory copy of the record.)

These clauses are primarily useful when dealing with tree-structured or graph-structured data, either in a single table or split across multiple tables. You traverse edges or tree branches from one place to another, while reserving the right to come back and change any of these pointer values.

All locks set by LOCK IN SHARE MODE and FOR UPDATE queries are released when the transaction is committed or rolled back.

Note

Locking of rows for update using SELECT FOR UPDATE only applies when autocommit is disabled (either by beginning a transaction with START TRANSACTION or by setting autocommit to 0). If autocommit is enabled, the rows matching the specification are not locked.

Locking Read Examples

Suppose that you want to insert a new row into a table child, and make sure that the child row has a parent row in table parent. Your application code can ensure referential integrity throughout this sequence of operations.

First, use a consistent read to query the table PARENT and verify that the parent row exists. Can you safely insert the child row to table CHILD? No, because some other session could delete the parent row in the moment between your SELECT and your INSERT, without you being aware of it.

To avoid this potential issue, perform the SELECT using LOCK IN SHARE MODE:

SELECT * FROM parent WHERE NAME = 'Jones' LOCK IN SHARE MODE;

After the LOCK IN SHARE MODE query returns the parent 'Jones', you can safely add the child record to the CHILD table and commit the transaction. Any transaction that tries to acquire an exclusive lock on the applicable row in the PARENT table waits until you are finished, that is, until the data in all tables is in a consistent state.
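
A minimal sketch of the complete sequence might look like this (the child table's parent_name column is hypothetical, used only for illustration):

START TRANSACTION;

-- Lock the parent row in share mode so that no other session can delete it
-- before the child row has been inserted and the transaction committed.
SELECT * FROM parent WHERE NAME = 'Jones' LOCK IN SHARE MODE;

-- If the parent row was returned, it is now safe to add the child record.
INSERT INTO child (parent_name) VALUES ('Jones');

COMMIT;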

For another example, consider an integer counter field in a table CHILD_CODES, used to assign a unique identifier to each child added to table CHILD. Do not use either consistent read or a shared mode read to read the present value of the counter, because two users of the database could see the same value for the counter, and a duplicate-key error occurs if two transactions attempt to add rows with the same identifier to the CHILD table.

Here, LOCK IN SHARE MODE is not a good solution because if two users read the counter at the same time, at least one of them ends up in deadlock when it attempts to update the counter.

To implement reading and incrementing the counter, first perform a locking read of the counter using FOR UPDATE, and then increment the counter. For example:

SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;

A SELECT ... FOR UPDATE reads the latest available data, setting exclusive locks on each row it reads. Thus, it sets the same locks a searched SQL UPDATE would set on the rows.

The preceding description is merely an example of how SELECT ... FOR UPDATE works. In MySQL, the specific task of generating a unique identifier actually can be accomplished using only a single access to the table:

UPDATE child_codes SET counter_field = LAST_INSERT_ID(counter_field + 1);
SELECT LAST_INSERT_ID();

The SELECT statement merely retrieves the identifier information (specific to the current connection). It does not access any table.

15.5.3 Locks Set by Different SQL Statements in InnoDB

A locking read, an UPDATE, or a DELETE generally sets record locks on every index record that is scanned in the processing of the SQL statement. It does not matter whether there are WHERE conditions in the statement that would exclude the row. InnoDB does not remember the exact WHERE condition, but only knows which index ranges were scanned. The locks are normally next-key locks that also block inserts into the gap immediately before the record. However, gap locking can be disabled explicitly, which causes next-key locking not to be used. For more information, see Section 15.5.1, “InnoDB Locking”. The transaction isolation level also can affect which locks are set; see Section 15.5.2.1, “Transaction Isolation Levels”.

If a secondary index is used in a search and index record locks to be set are exclusive, InnoDB also retrieves the corresponding clustered index records and sets locks on them.

Differences between shared and exclusive locks are described in Section 15.5.1, “InnoDB Locking”.

If you have no indexes suitable for your statement and MySQL must scan the entire table to process the statement, every row of the table becomes locked, which in turn blocks all inserts by other users to the table. It is important to create good indexes so that your queries do not unnecessarily scan many rows.

For SELECT ... FOR UPDATE or SELECT ... LOCK IN SHARE MODE, locks are acquired for scanned rows, and are expected to be released for rows that do not qualify for inclusion in the result set (for example, if they do not meet the criteria given in the WHERE clause). However, in some cases, rows might not be unlocked immediately because the relationship between a result row and its original source is lost during query execution. For example, in a UNION, scanned (and locked) rows from a table might be inserted into a temporary table before evaluating whether they qualify for the result set. In this circumstance, the relationship of the rows in the temporary table to the rows in the original table is lost and the latter rows are not unlocked until the end of query execution.

InnoDB sets specific types of locks as follows.

  • SELECT ... FROM is a consistent read, reading a snapshot of the database and setting no locks unless the transaction isolation level is set to SERIALIZABLE. For SERIALIZABLE level, the search sets shared next-key locks on the index records it encounters. However, only an index record lock is required for statements that lock rows using a unique index to search for a unique row.

  • SELECT ... FROM ... LOCK IN SHARE MODE sets shared next-key locks on all index records the search encounters. However, only an index record lock is required for statements that lock rows using a unique index to search for a unique row.

  • SELECT ... FROM ... FOR UPDATE sets an exclusive next-key lock on every record the search encounters. However, only an index record lock is required for statements that lock rows using a unique index to search for a unique row.

    For index records the search encounters, SELECT ... FROM ... FOR UPDATE blocks other sessions from doing SELECT ... FROM ... LOCK IN SHARE MODE or from reading in certain transaction isolation levels. Consistent reads ignore any locks set on the records that exist in the read view.

  • UPDATE ... WHERE ... sets an exclusive next-key lock on every record the search encounters. However, only an index record lock is required for statements that lock rows using a unique index to search for a unique row.

  • When UPDATE modifies a clustered index record, implicit locks are taken on affected secondary index records. The UPDATE operation also takes shared locks on affected secondary index records when performing duplicate check scans prior to inserting new secondary index records, and when inserting new secondary index records.

  • DELETE FROM ... WHERE ... sets an exclusive next-key lock on every record the search encounters. However, only an index record lock is required for statements that lock rows using a unique index to search for a unique row.

  • INSERT sets an exclusive lock on the inserted row. This lock is an index-record lock, not a next-key lock (that is, there is no gap lock) and does not prevent other sessions from inserting into the gap before the inserted row.

    Prior to inserting the row, a type of gap lock called an insert intention gap lock is set. This lock signals the intent to insert in such a way that multiple transactions inserting into the same index gap need not wait for each other if they are not inserting at the same position within the gap. Suppose that there are index records with values of 4 and 7. Separate transactions that attempt to insert values of 5 and 6 each lock the gap between 4 and 7 with insert intention locks prior to obtaining the exclusive lock on the inserted row, but do not block each other because the rows are nonconflicting.

    If a duplicate-key error occurs, a shared lock on the duplicate index record is set. This use of a shared lock can result in deadlock should there be multiple sessions trying to insert the same row if another session already has an exclusive lock. This can occur if another session deletes the row. Suppose that an InnoDB table t1 has the following structure:

    CREATE TABLE t1 (i INT, PRIMARY KEY (i)) ENGINE = InnoDB;
    

    Now suppose that three sessions perform the following operations in order:

    Session 1:

    START TRANSACTION;
    INSERT INTO t1 VALUES(1);
    

    Session 2:

    START TRANSACTION;
    INSERT INTO t1 VALUES(1);
    

    Session 3:

    START TRANSACTION;
    INSERT INTO t1 VALUES(1);
    

    Session 1:

    ROLLBACK;
    

    The first operation by session 1 acquires an exclusive lock for the row. The operations by sessions 2 and 3 both result in a duplicate-key error and they both request a shared lock for the row. When session 1 rolls back, it releases its exclusive lock on the row and the queued shared lock requests for sessions 2 and 3 are granted. At this point, sessions 2 and 3 deadlock: Neither can acquire an exclusive lock for the row because of the shared lock held by the other.

    A similar situation occurs if the table already contains a row with key value 1 and three sessions perform the following operations in order:

    Session 1:

    START TRANSACTION;
    DELETE FROM t1 WHERE i = 1;
    

    Session 2:

    START TRANSACTION;
    INSERT INTO t1 VALUES(1);
    

    Session 3:

    START TRANSACTION;
    INSERT INTO t1 VALUES(1);
    

    Session 1:

    COMMIT;
    

    The first operation by session 1 acquires an exclusive lock for the row. The operations by sessions 2 and 3 both result in a duplicate-key error and they both request a shared lock for the row. When session 1 commits, it releases its exclusive lock on the row and the queued shared lock requests for sessions 2 and 3 are granted. At this point, sessions 2 and 3 deadlock: Neither can acquire an exclusive lock for the row because of the shared lock held by the other.

  • INSERT ... ON DUPLICATE KEY UPDATE differs from a simple INSERT in that an exclusive next-key lock rather than a shared lock is placed on the row to be updated when a duplicate-key error occurs.

  • REPLACE is done like an INSERT if there is no collision on a unique key. Otherwise, an exclusive next-key lock is placed on the row to be replaced.

  • INSERT INTO T SELECT ... FROM S WHERE ... sets an exclusive index record lock (without a gap lock) on each row inserted into T. If the transaction isolation level is READ COMMITTED, or innodb_locks_unsafe_for_binlog is enabled and the transaction isolation level is not SERIALIZABLE, InnoDB does the search on S as a consistent read (no locks). Otherwise, InnoDB sets shared next-key locks on rows from S. InnoDB has to set locks in the latter case: In roll-forward recovery from a backup, every SQL statement must be executed in exactly the same way it was done originally.

    CREATE TABLE ... SELECT ... performs the SELECT with shared next-key locks or as a consistent read, as for INSERT ... SELECT.

    When a SELECT is used in the constructs REPLACE INTO t SELECT ... FROM s WHERE ... or UPDATE t ... WHERE col IN (SELECT ... FROM s ...), InnoDB sets shared next-key locks on rows from table s.

  • While initializing a previously specified AUTO_INCREMENT column on a table, InnoDB sets an exclusive lock on the end of the index associated with the AUTO_INCREMENT column. In accessing the auto-increment counter, InnoDB uses a specific AUTO-INC table lock mode where the lock lasts only to the end of the current SQL statement, not to the end of the entire transaction. Other sessions cannot insert into the table while the AUTO-INC table lock is held; see Section 15.5.2, “InnoDB Transaction Model”.

    InnoDB fetches the value of a previously initialized AUTO_INCREMENT column without setting any locks.

  • If a FOREIGN KEY constraint is defined on a table, any insert, update, or delete that requires the constraint condition to be checked sets shared record-level locks on the records that it looks at to check the constraint. InnoDB also sets these locks in the case where the constraint fails.

  • LOCK TABLES sets table locks, but it is the higher MySQL layer above the InnoDB layer that sets these locks. InnoDB is aware of table locks if innodb_table_locks = 1 (the default) and autocommit = 0, and the MySQL layer above InnoDB knows about row-level locks.

    Otherwise, InnoDB's automatic deadlock detection cannot detect deadlocks where such table locks are involved. Also, because in this case the higher MySQL layer does not know about row-level locks, it is possible to get a table lock on a table where another session currently has row-level locks. However, this does not endanger transaction integrity, as discussed in Section 15.5.5.2, “Deadlock Detection and Rollback”. See also Section 15.8.8, “Limits on InnoDB Tables”.

15.5.4 Phantom Rows

The so-called phantom problem occurs within a transaction when the same query produces different sets of rows at different times. For example, if a SELECT is executed twice, but returns a row the second time that was not returned the first time, the row is a phantom row.

Suppose that there is an index on the id column of the child table and that you want to read and lock all rows from the table having an identifier value larger than 100, with the intention of updating some column in the selected rows later:

SELECT * FROM child WHERE id > 100 FOR UPDATE;

The query scans the index starting from the first record where id is bigger than 100. Let the table contain rows having id values of 90 and 102. If the locks set on the index records in the scanned range do not lock out inserts made in the gaps (in this case, the gap between 90 and 102), another session can insert a new row into the table with an id of 101. If you were to execute the same SELECT within the same transaction, you would see a new row with an id of 101 (a phantom) in the result set returned by the query. If we regard a set of rows as a data item, the new phantom child would violate the isolation principle of transactions that a transaction should be able to run so that the data it has read does not change during the transaction.

To prevent phantoms, InnoDB uses an algorithm called next-key locking that combines index-row locking with gap locking. InnoDB performs row-level locking in such a way that when it searches or scans a table index, it sets shared or exclusive locks on the index records it encounters. Thus, the row-level locks are actually index-record locks. In addition, a next-key lock on an index record also affects the gap before that index record. That is, a next-key lock is an index-record lock plus a gap lock on the gap preceding the index record. If one session has a shared or exclusive lock on record R in an index, another session cannot insert a new index record in the gap immediately before R in the index order.

When InnoDB scans an index, it can also lock the gap after the last record in the index. That is what happens in the preceding example: to prevent any insert into the table where id would be bigger than 100, the locks set by InnoDB include a lock on the gap following id value 102.

You can use next-key locking to implement a uniqueness check in your application: If you read your data in share mode and do not see a duplicate for a row you are going to insert, then you can safely insert your row and know that the next-key lock set on the successor of your row during the read prevents anyone from inserting a duplicate for your row in the meantime. Thus, next-key locking enables you to lock the nonexistence of something in your table.

Gap locking can be disabled as discussed in Section 15.5.1, “InnoDB Locking”. This may cause phantom problems because other sessions can insert new rows into the gaps when gap locking is disabled.

15.5.5 Deadlocks in InnoDB

A deadlock is a situation where different transactions are unable to proceed because each holds a lock that the other needs. Because both transactions are waiting for a resource to become available, neither ever releases the locks it holds.

A deadlock can occur when transactions lock rows in multiple tables (through statements such as UPDATE or SELECT ... FOR UPDATE), but in the opposite order. A deadlock can also occur when such statements lock ranges of index records and gaps, with each transaction acquiring some locks but not others due to a timing issue. For a deadlock example, see Section 15.5.5.1, “An InnoDB Deadlock Example”.

To reduce the possibility of deadlocks, use transactions rather than LOCK TABLES statements; keep transactions that insert or update data small enough that they do not stay open for long periods of time; when different transactions update multiple tables or large ranges of rows, use the same order of operations (such as SELECT ... FOR UPDATE) in each transaction; create indexes on the columns used in SELECT ... FOR UPDATE and UPDATE ... WHERE statements. The possibility of deadlocks is not affected by the isolation level, because the isolation level changes the behavior of read operations, while deadlocks occur because of write operations. For more information about avoiding and recovering from deadlock conditions, see Section 15.5.5.3, “How to Minimize and Handle Deadlocks”.

When deadlock detection is enabled (the default) and a deadlock does occur, InnoDB detects the condition and rolls back one of the transactions (the victim). If deadlock detection is disabled using the innodb_deadlock_detect configuration option, InnoDB relies on the innodb_lock_wait_timeout setting to roll back transactions in case of a deadlock. Thus, even if your application logic is correct, you must still handle the case where a transaction must be retried. To see the last deadlock in an InnoDB user transaction, use the SHOW ENGINE INNODB STATUS command. If frequent deadlocks highlight a problem with transaction structure or application error handling, run with the innodb_print_all_deadlocks setting enabled to print information about all deadlocks to the mysqld error log. For more information about how deadlocks are automatically detected and handled, see Section 15.5.5.2, “Deadlock Detection and Rollback”.
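
As an illustration only (the values shown are the defaults except for innodb_print_all_deadlocks), the variables mentioned above might be configured in an option file as follows:

[mysqld]
# Automatic deadlock detection (enabled by default)
innodb_deadlock_detect=ON
# Fallback: roll back a lock request that has waited this many seconds
innodb_lock_wait_timeout=50
# Record every deadlock, not just the latest one, in the error log
innodb_print_all_deadlocks=ON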

15.5.5.1 An InnoDB Deadlock Example

The following example illustrates how an error can occur when a lock request would cause a deadlock. The example involves two clients, A and B.

First, client A creates a table containing one row, and then begins a transaction. Within the transaction, A obtains an S lock on the row by selecting it in share mode:

mysql> CREATE TABLE t (i INT) ENGINE = InnoDB;
Query OK, 0 rows affected (1.07 sec)

mysql> INSERT INTO t (i) VALUES(1);
Query OK, 1 row affected (0.09 sec)

mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT * FROM t WHERE i = 1 LOCK IN SHARE MODE;
+------+
| i    |
+------+
|    1 |
+------+

Next, client B begins a transaction and attempts to delete the row from the table:

mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)

mysql> DELETE FROM t WHERE i = 1;

The delete operation requires an X lock. The lock cannot be granted because it is incompatible with the S lock that client A holds, so the request goes on the queue of lock requests for the row and client B blocks.

Finally, client A also attempts to delete the row from the table:

mysql> DELETE FROM t WHERE i = 1;
ERROR 1213 (40001): Deadlock found when trying to get lock;
try restarting transaction

Deadlock occurs here because client A needs an X lock to delete the row. However, that lock request cannot be granted because client B already has a request for an X lock and is waiting for client A to release its S lock. Nor can the S lock held by A be upgraded to an X lock because of the prior request by B for an X lock. As a result, InnoDB generates an error for one of the clients and releases its locks. The client returns this error:

ERROR 1213 (40001): Deadlock found when trying to get lock;
try restarting transaction

At that point, the lock request for the other client can be granted and it deletes the row from the table.

15.5.5.2 Deadlock Detection and Rollback

When deadlock detection is enabled (the default), InnoDB automatically detects transaction deadlocks and rolls back a transaction or transactions to break the deadlock. InnoDB tries to pick small transactions to roll back, where the size of a transaction is determined by the number of rows inserted, updated, or deleted.

InnoDB is aware of table locks if innodb_table_locks = 1 (the default) and autocommit = 0, and the MySQL layer above it knows about row-level locks. Otherwise, InnoDB cannot detect deadlocks where a table lock set by a MySQL LOCK TABLES statement or a lock set by a storage engine other than InnoDB is involved. Resolve these situations by setting the value of the innodb_lock_wait_timeout system variable.

When InnoDB performs a complete rollback of a transaction, all locks set by the transaction are released. However, if just a single SQL statement is rolled back as a result of an error, some of the locks set by the statement may be preserved. This happens because InnoDB stores row locks in a format such that it cannot know afterward which lock was set by which statement.

If a SELECT calls a stored function in a transaction, and a statement within the function fails, that statement rolls back. Furthermore, if ROLLBACK is executed after that, the entire transaction rolls back.

If the LATEST DETECTED DEADLOCK section of InnoDB Monitor output includes a message stating, TOO DEEP OR LONG SEARCH IN THE LOCK TABLE WAITS-FOR GRAPH, WE WILL ROLL BACK FOLLOWING TRANSACTION, this indicates that the number of transactions on the wait-for list has reached a limit of 200. A wait-for list that exceeds 200 transactions is treated as a deadlock and the transaction attempting to check the wait-for list is rolled back. The same error may also occur if the locking thread must look at more than 1,000,000 locks owned by transactions on the wait-for list.

For techniques to organize database operations to avoid deadlocks, see Section 15.5.5, “Deadlocks in InnoDB”.

Disabling Deadlock Detection

On high concurrency systems, deadlock detection can cause a slowdown when numerous threads wait for the same lock. At times, it may be more efficient to disable deadlock detection and rely on the innodb_lock_wait_timeout setting for transaction rollback when a deadlock occurs. Deadlock detection can be disabled using the innodb_deadlock_detect configuration option.
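
For example, deadlock detection can be disabled at runtime; this is a trade-off for specific high-concurrency workloads rather than a general recommendation:

SET GLOBAL innodb_deadlock_detect=OFF;
-- Transactions caught in an undetected deadlock are then rolled back only
-- after waiting innodb_lock_wait_timeout seconds for a lock.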

15.5.5.3 How to Minimize and Handle Deadlocks

This section builds on the conceptual information about deadlocks in Section 15.5.5.2, “Deadlock Detection and Rollback”. It explains how to organize database operations to minimize deadlocks and the subsequent error handling required in applications.

Deadlocks are a classic problem in transactional databases, but they are not dangerous unless they are so frequent that you cannot run certain transactions at all. Normally, you must write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock.

InnoDB uses automatic row-level locking. You can get deadlocks even in the case of transactions that just insert or delete a single row. That is because these operations are not really atomic; they automatically set locks on the (possibly several) index records of the row inserted or deleted.

You can cope with deadlocks and reduce the likelihood of their occurrence with the following techniques:

  • At any time, issue the SHOW ENGINE INNODB STATUS command to determine the cause of the most recent deadlock. That can help you to tune your application to avoid deadlocks.

  • If frequent deadlock warnings cause concern, collect more extensive debugging information by enabling the innodb_print_all_deadlocks configuration option. Information about each deadlock, not just the latest one, is recorded in the MySQL error log. Disable this option when you are finished debugging.

  • Always be prepared to re-issue a transaction if it fails due to deadlock. Deadlocks are not dangerous. Just try again.

  • Keep transactions small and short in duration to make them less prone to collision.

  • Commit transactions immediately after making a set of related changes to make them less prone to collision. In particular, do not leave an interactive mysql session open for a long time with an uncommitted transaction.

  • If you use locking reads (SELECT ... FOR UPDATE or SELECT ... LOCK IN SHARE MODE), try using a lower isolation level such as READ COMMITTED.

  • When modifying multiple tables within a transaction, or different sets of rows in the same table, do those operations in a consistent order each time. Then transactions form well-defined queues and do not deadlock. For example, organize database operations into functions within your application, or call stored routines, rather than coding multiple similar sequences of INSERT, UPDATE, and DELETE statements in different places.

  • Add well-chosen indexes to your tables. Then your queries need to scan fewer index records and consequently set fewer locks. Use EXPLAIN SELECT to determine which indexes the MySQL server regards as the most appropriate for your queries.

  • Use less locking. If you can afford to permit a SELECT to return data from an old snapshot, do not add the clause FOR UPDATE or LOCK IN SHARE MODE to it. Using the READ COMMITTED isolation level is good here, because each consistent read within the same transaction reads from its own fresh snapshot.

  • If nothing else helps, serialize your transactions with table-level locks. The correct way to use LOCK TABLES with transactional tables, such as InnoDB tables, is to begin a transaction with SET autocommit = 0 (not START TRANSACTION) followed by LOCK TABLES, and to not call UNLOCK TABLES until you commit the transaction explicitly. For example, if you need to write to table t1 and read from table t2, you can do this:

    SET autocommit=0;
    LOCK TABLES t1 WRITE, t2 READ, ...;
    ... do something with tables t1 and t2 here ...
    COMMIT;
    UNLOCK TABLES;
    

    Table-level locks prevent concurrent updates to the table, avoiding deadlocks at the expense of less responsiveness for a busy system.

  • Another way to serialize transactions is to create an auxiliary semaphore table that contains just a single row. Have each transaction update that row before accessing other tables. In that way, all transactions happen in a serial fashion. Note that the InnoDB instant deadlock detection algorithm also works in this case, because the serializing lock is a row-level lock. With MySQL table-level locks, the timeout method must be used to resolve deadlocks.
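
    A minimal sketch, using a hypothetical app_semaphore table, might look like this:

    CREATE TABLE app_semaphore (id INT NOT NULL PRIMARY KEY, value INT) ENGINE=InnoDB;
    INSERT INTO app_semaphore VALUES (1, 0);

    START TRANSACTION;
    -- The exclusive row lock on the single semaphore row serializes transactions.
    UPDATE app_semaphore SET value = value + 1 WHERE id = 1;
    -- ... access the other tables here ...
    COMMIT;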

15.6 InnoDB Configuration

This section provides configuration information and procedures for InnoDB initialization, startup, and various components and features of the InnoDB storage engine. For information about optimizing database operations for InnoDB tables, see Section 9.5, “Optimizing for InnoDB Tables”.

15.6.1 InnoDB Startup Configuration

The first decisions to make about InnoDB configuration involve the configuration of data files, log files, page size, and memory buffers. It is recommended that you define data file, log file, and page size configuration before creating the InnoDB instance. Modifying data file or log file configuration after the InnoDB instance is created may involve a non-trivial procedure, and page size can only be defined when the InnoDB instance is first initialized.

In addition to these topics, this section provides information about specifying InnoDB options in a configuration file, viewing InnoDB initialization information, and important storage considerations.

Specifying Options in a MySQL Configuration File

Because MySQL uses data file, log file, and page size configuration settings to initialize the InnoDB instance, it is recommended that you define these settings in a configuration file that MySQL reads at startup, prior to initializing InnoDB for the first time. InnoDB is initialized when the MySQL server is started, and the first initialization of InnoDB normally occurs the first time you start the MySQL server.

You can place InnoDB options in the [mysqld] group of any option file that your server reads when it starts. The locations of MySQL option files are described in Section 5.2.6, “Using Option Files”.

To make sure that mysqld reads options only from a specific file, use the --defaults-file option as the first option on the command line when starting the server:

mysqld --defaults-file=path_to_configuration_file

Viewing InnoDB Initialization Information

To view InnoDB initialization information during startup, start mysqld from a command prompt. When mysqld is started from a command prompt, initialization information is printed to the console.

For example, on Windows, if mysqld is located in C:\Program Files\MySQL\MySQL Server 5.7\bin, start the MySQL server like this:

C:\> "C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqld" --console

On Unix-like systems, mysqld is located in the bin directory of your MySQL installation:

shell> bin/mysqld --user=mysql &

If you do not send server output to the console, check the error log after startup to see the initialization information InnoDB printed during the startup process.

For information about starting MySQL using other methods, see Section 2.10.5, “Starting and Stopping MySQL Automatically”.

Important Storage Considerations

Review the following storage-related considerations before proceeding with your startup configuration.

  • In some cases, database performance improves if the data is not all placed on the same physical disk. Putting log files on a different disk from data is very often beneficial for performance. For example, you can place system tablespace data files and log files on different disks. You can also use raw disk partitions (raw devices) for InnoDB data files, which may speed up I/O. See Section 15.7.3, “Using Raw Disk Partitions for the System Tablespace”.

  • InnoDB is a transaction-safe (ACID compliant) storage engine for MySQL that has commit, rollback, and crash-recovery capabilities to protect user data. However, it cannot do so if the underlying operating system or hardware does not work as advertised. Many operating systems or disk subsystems may delay or reorder write operations to improve performance. On some operating systems, the very fsync() system call that should wait until all unwritten data for a file has been flushed might actually return before the data has been flushed to stable storage. Because of this, an operating system crash or a power outage may destroy recently committed data, or in the worst case, even corrupt the database because of write operations having been reordered. If data integrity is important to you, perform some pull-the-plug tests before using anything in production. On OS X 10.3 and higher, InnoDB uses a special fcntl() file flush method. Under Linux, it is advisable to disable the write-back cache.

    On ATA/SATA disk drives, a command such as hdparm -W0 /dev/hda may work to disable the write-back cache. Beware that some drives or disk controllers may be unable to disable the write-back cache.

  • With regard to InnoDB recovery capabilities that protect user data, InnoDB uses a file flush technique involving a structure called the doublewrite buffer, which is enabled by default (innodb_doublewrite=ON). The doublewrite buffer adds safety to recovery following a crash or power outage, and improves performance on most varieties of Unix by reducing the need for fsync() operations. It is recommended that the innodb_doublewrite option remains enabled if you are concerned with data integrity or possible failures. For additional information about the doublewrite buffer, see Section 15.12.1, “InnoDB Disk I/O”.

  • Before using NFS with InnoDB, review potential issues outlined in Using NFS with MySQL.

System Tablespace Data File Configuration

System tablespace data files are configured using the innodb_data_file_path and innodb_data_home_dir configuration options.

The innodb_data_file_path configuration option is used to configure the InnoDB system tablespace data files. The value of innodb_data_file_path should be a list of one or more data file specifications. If you name more than one data file, separate them by semicolon (;) characters:

innodb_data_file_path=datafile_spec1[;datafile_spec2]...

For example, the following setting explicitly creates a minimally sized system tablespace:

[mysqld]
innodb_data_file_path=ibdata1:12M:autoextend

This setting configures a single 12MB data file named ibdata1 that is auto-extending. No location for the file is given, so by default, InnoDB creates it in the MySQL data directory.

Sizes are specified using K, M, or G suffix letters to indicate units of KB, MB, or GB.

A tablespace containing a fixed-size 50MB data file named ibdata1 and a 50MB auto-extending file named ibdata2 in the data directory can be configured like this:

[mysqld]
innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend

The full syntax for a data file specification includes the file name, its size, and several optional attributes:

file_name:file_size[:autoextend[:max:max_file_size]]

The autoextend and max attributes can be used only for the last data file in the innodb_data_file_path line.

If you specify the autoextend option for the last data file, InnoDB extends the data file if it runs out of free space in the tablespace. The increment is 64MB at a time by default. To modify the increment, change the innodb_autoextend_increment system variable.
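
For example (illustrative only), the increment could be raised to 128MB in an option file:

[mysqld]
innodb_autoextend_increment=128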

If the disk becomes full, you might want to add another data file on another disk. For tablespace reconfiguration instructions, see Section 15.7.1, “Resizing the InnoDB System Tablespace”.

InnoDB is not aware of the file system maximum file size, so be cautious on file systems where the maximum file size is a small value such as 2GB. To specify a maximum size for an auto-extending data file, use the max attribute following the autoextend attribute. Use the max attribute only in cases where constraining disk usage is of critical importance, because exceeding the maximum size causes a fatal error, possibly including a crash. The following configuration permits ibdata1 to grow up to a limit of 500MB:

[mysqld]
innodb_data_file_path=ibdata1:12M:autoextend:max:500M

InnoDB creates tablespace files in the MySQL data directory by default (datadir). To specify a location explicitly, use the innodb_data_home_dir option. For example, to create two files named ibdata1 and ibdata2 in a directory named myibdata, configure InnoDB like this:

[mysqld]
innodb_data_home_dir = /path/to/myibdata/
innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend

Note

A trailing slash is required when specifying a value for innodb_data_home_dir.

InnoDB does not create directories, so make sure that the myibdata directory exists before you start the server. Use the Unix or DOS mkdir command to create any necessary directories.

Make sure that the MySQL server has the proper access rights to create files in the data directory. More generally, the server must have access rights in any directory where it needs to create data files.

InnoDB forms the directory path for each data file by textually concatenating the value of innodb_data_home_dir to the data file name. If the innodb_data_home_dir option is not specified in my.cnf at all, the default value is the dot directory ./, which means the MySQL data directory. (The MySQL server changes its current working directory to its data directory when it begins executing.)

If you specify innodb_data_home_dir as an empty string, you can specify absolute paths for the data files listed in the innodb_data_file_path value. The following example is equivalent to the preceding one:

[mysqld]
innodb_data_home_dir =
innodb_data_file_path=/path/to/myibdata/ibdata1:50M;/path/to/myibdata/ibdata2:50M:autoextend

InnoDB Log File Configuration

By default, InnoDB creates two 48MB log files in the MySQL data directory (datadir) named ib_logfile0 and ib_logfile1.

The following options can be used to modify the default configuration:

  • innodb_log_group_home_dir defines the directory path to the InnoDB log files (the redo logs). If this option is not configured, InnoDB log files are created in the MySQL data directory (datadir).

    You might use this option to place InnoDB log files in a different physical storage location than InnoDB data files to avoid potential I/O resource conflicts. For example:

    [mysqld]
    innodb_log_group_home_dir = /dr3/iblogs
    
    Note

    InnoDB does not create directories, so make sure that the log directory exists before you start the server. Use the Unix or DOS mkdir command to create any necessary directories.

    Make sure that the MySQL server has the proper access rights to create files in the log directory. More generally, the server must have access rights in any directory where it needs to create log files.

  • innodb_log_files_in_group defines the number of log files in the log group. The default and recommended value is 2.

  • innodb_log_file_size defines the size in bytes of each log file in the log group. The combined size of log files (innodb_log_file_size * innodb_log_files_in_group) cannot exceed a maximum value that is slightly less than 512GB. A pair of 255 GB log files, for example, approaches the limit but does not exceed it. The default log file size is 48MB. Generally, the combined size of the log files should be large enough that the server can smooth out peaks and troughs in workload activity, which often means that there is enough redo log space to handle more than an hour of write activity. The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O. For additional information, see Section 9.5.4, “Optimizing InnoDB Redo Logging”.
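For example, a my.cnf fragment such as the following (the directory path and file size are illustrative, not recommendations) places two 512MB redo log files in a dedicated log directory:

[mysqld]
innodb_log_group_home_dir = /dr3/iblogs
innodb_log_files_in_group = 2
innodb_log_file_size = 512M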

InnoDB Undo Tablespace Configuration

By default, InnoDB undo logs are part of the system tablespace. However, you can choose to store InnoDB undo logs in one or more separate undo tablespaces, typically on a different storage device.

The innodb_undo_directory configuration option defines the path where InnoDB creates separate tablespaces for the undo logs. This option is typically used in conjunction with the innodb_undo_logs and innodb_undo_tablespaces options, which determine the disk layout of the undo logs outside the system tablespace.
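As an illustrative sketch only (the directory path and the number of undo tablespaces are assumptions, and innodb_undo_tablespaces takes effect only when the MySQL instance is initialized), a configuration that places undo logs in two separate tablespaces on another device might look like this:

[mysqld]
innodb_undo_directory = /path/to/undo/
innodb_undo_tablespaces = 2
innodb_undo_logs = 128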

For more information, see Section 15.7.7, “Storing InnoDB Undo Logs in Separate Tablespaces”.

InnoDB Temporary Tablespace Configuration

By default, InnoDB creates a single auto-extending temporary tablespace data file named ibtmp1 that is slightly larger than 12MB in the innodb_data_home_dir directory. The default temporary tablespace data file configuration can be modified at startup using the innodb_temp_data_file_path configuration option.

The innodb_temp_data_file_path option specifies the path, file name, and file size for InnoDB temporary tablespace data files. The full directory path for a file is formed by concatenating innodb_data_home_dir to the path specified by innodb_temp_data_file_path. File size is specified by appending K, M, or G to the size value to indicate KB, MB, or GB (1GB = 1024MB). The sum of the sizes of the files must be slightly larger than 12MB.

The innodb_data_home_dir default value is the MySQL data directory (datadir).
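For example, the following my.cnf entry (the file name and sizes are illustrative, and the attribute syntax is assumed to match innodb_data_file_path) creates a 12MB auto-extending temporary tablespace data file named ibtmp1, capped at 500MB:

[mysqld]
innodb_temp_data_file_path = ibtmp1:12M:autoextend:max:500M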

InnoDB Page Size Configuration

The innodb_page_size option specifies the page size for all InnoDB tablespaces in a MySQL instance. This value is set when the instance is created and remains constant afterward. Valid values are 64k, 32k, 16k (the default), 8k, and 4k. Alternatively, you can specify page size in bytes (65536, 32768, 16384, 8192, 4096).

The default page size of 16k is appropriate for a wide range of workloads, particularly for queries involving table scans and DML operations involving bulk updates. Smaller page sizes might be more efficient for OLTP workloads involving many small writes, where contention can be an issue when a single page contains many rows. Smaller pages might also be efficient with SSD storage devices, which typically use small block sizes. Keeping the InnoDB page size close to the storage device block size minimizes the amount of unchanged data that is rewritten to disk.
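As a hedged example only (whether a smaller page size actually helps depends on your workload and storage), an instance intended for an SSD-backed OLTP workload could be initialized with a 4KB page size. Because the page size is fixed when the instance is created, set this option before the data directory is initialized:

[mysqld]
innodb_page_size=4k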

InnoDB Memory Configuration

MySQL allocates memory to various caches and buffers to improve performance of database operations. When allocating memory for InnoDB, always consider memory required by the operating system, memory allocated to other applications, and memory allocated for other MySQL buffers and caches. For example, if you use MyISAM tables, consider the amount of memory allocated for the key buffer (key_buffer_size). For an overview of MySQL buffers and caches, see Section 9.12.4.1, “How MySQL Uses Memory”.

Buffers specific to InnoDB are configured using the following parameters:

  • innodb_buffer_pool_size defines the size of the buffer pool, the memory area where InnoDB caches table and index data.

  • innodb_log_buffer_size defines the size of the buffer that InnoDB uses to write to the log files on disk.

Warning

On 32-bit GNU/Linux x86, be careful not to set memory usage too high. glibc may permit the process heap to grow over thread stacks, which crashes your server. There is a risk of this occurring if the memory allocated to the mysqld process for global and per-thread buffers and caches is close to or exceeds 2GB.

A formula similar to the following that calculates global and per-thread memory allocation for MySQL can be used to estimate MySQL memory usage. You may need to modify the formula to account for buffers and caches in your MySQL version and configuration. For an overview of MySQL buffers and caches, see Section 9.12.4.1, “How MySQL Uses Memory”.

innodb_buffer_pool_size
+ key_buffer_size
+ max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size)
+ max_connections*2MB

Each thread uses a stack (often 2MB, but only 256KB in MySQL binaries provided by Oracle Corporation) and in the worst case also uses sort_buffer_size + read_buffer_size additional memory.
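A rough worked example, using illustrative values of innodb_buffer_pool_size=2G, key_buffer_size=128M, max_connections=200, sort_buffer_size=256K, read_buffer_size=128K, and binlog_cache_size=32K:

2048MB + 128MB + 200*(256KB + 128KB + 32KB) + 200*2MB
= 2048MB + 128MB + ~81MB + 400MB
= ~2657MB, or roughly 2.6GB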

On Linux, if the kernel is enabled for large page support, InnoDB can use large pages to allocate memory for its buffer pool. See Section 9.12.4.2, “Enabling Large Page Support”.

15.6.2 Configuring InnoDB for Read-Only Operation

You can query InnoDB tables where the MySQL data directory is on read-only media by enabling the --innodb-read-only configuration option at server startup.

How to Enable

To prepare an instance for read-only operation, make sure all the necessary information is flushed to the data files before storing it on the read-only medium. Run the server with change buffering disabled (innodb_change_buffering=0) and do a slow shutdown.
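One hedged way to perform this preparation (the account and option placement are illustrative) is to restart the server with change buffering disabled, then request a slow shutdown:

[mysqld]
innodb_change_buffering=0

mysql> SET GLOBAL innodb_fast_shutdown=0;
shell> mysqladmin -u root -p shutdown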

To enable read-only mode for an entire MySQL instance, specify the following configuration options at server startup:

  • --innodb-read-only=1

  • If the instance is on read-only media such as a DVD or CD, or the /var directory is not writeable by all: --pid-file=path_on_writeable_media and --event-scheduler=disabled

Usage Scenarios

This mode of operation is appropriate in situations such as:

  • Distributing a MySQL application, or a set of MySQL data, on a read-only storage medium such as a DVD or CD.

  • Multiple MySQL instances querying the same data directory simultaneously, typically in a data warehousing configuration. You might use this technique to avoid bottlenecks that can occur with a heavily loaded MySQL instance, or you might use different configuration options for the various instances to tune each one for particular kinds of queries.

  • Querying data that has been put into a read-only state for security or data integrity reasons, such as archived backup data.

Note

This feature is mainly intended for flexibility in distribution and deployment, rather than raw performance based on the read-only aspect. See Section 9.5.3, “Optimizing InnoDB Read-Only Transactions” for ways to tune the performance of read-only queries, which do not require making the entire server read-only.

How It Works

When the server is run in read-only mode through the --innodb-read-only option, certain InnoDB features and components are reduced or turned off entirely:

  • No change buffering is done, in particular no merges from the change buffer. To make sure the change buffer is empty when you prepare the instance for read-only operation, disable change buffering (innodb_change_buffering=0) and do a slow shutdown first.

  • There is no crash recovery phase at startup. The instance must have performed a slow shutdown before being put into the read-only state.

  • Because the redo log is not used in read-only operation, you can set innodb_log_file_size to the smallest size possible (1 MB) before making the instance read-only.

  • All background threads other than I/O read threads are turned off. As a consequence, a read-only instance cannot encounter any deadlocks.

  • Information about deadlocks, monitor output, and so on is not written to temporary files. As a consequence, SHOW ENGINE INNODB STATUS does not produce any output.

  • If the MySQL server is started with --innodb-read-only but the data directory is still on writeable media, the root user can still perform DCL operations such as GRANT and REVOKE.

  • Changes to configuration option settings that would normally change the behavior of write operations have no effect when the server is in read-only mode.

  • The MVCC processing to enforce isolation levels is turned off. All queries read the latest version of a record, because updates and deletes are not possible.

  • The undo log is not used. Disable any settings for the innodb_undo_tablespaces and innodb_undo_directory configuration options.

15.6.3 InnoDB Buffer Pool Configuration

This section provides configuration and tuning information for the InnoDB buffer pool.

15.6.3.1 The InnoDB Buffer Pool

InnoDB maintains a storage area called the buffer pool for caching data and indexes in memory. Knowing how the InnoDB buffer pool works, and taking advantage of it to keep frequently accessed data in memory, is an important aspect of MySQL tuning. For information about how the InnoDB buffer pool works, see InnoDB Buffer Pool LRU Algorithm.

You can configure the various aspects of the InnoDB buffer pool to improve performance.

InnoDB Buffer Pool LRU Algorithm

InnoDB manages the buffer pool as a list, using a variation of the least recently used (LRU) algorithm. When room is needed to add a new page to the pool, InnoDB evicts the least recently used page and adds the new page to the middle of the list. This midpoint insertion strategy treats the list as two sublists:

  • At the head, a sublist of new (or young) pages that were accessed recently.

  • At the tail, a sublist of old pages that were accessed less recently.

This algorithm keeps pages that are heavily used by queries in the new sublist. The old sublist contains less-used pages; these pages are candidates for eviction.

The LRU algorithm operates as follows by default:

  • 3/8 of the buffer pool is devoted to the old sublist.

  • The midpoint of the list is the boundary where the tail of the new sublist meets the head of the old sublist.

  • When InnoDB reads a page into the buffer pool, it initially inserts it at the midpoint (the head of the old sublist). A page can be read in because it is required for a user-specified operation such as an SQL query, or as part of a read-ahead operation performed automatically by InnoDB.

  • Accessing a page in the old sublist makes it young, moving it to the head of the buffer pool (the head of the new sublist). If the page was read in because it was required, the first access occurs immediately and the page is made young. If the page was read in due to read-ahead, the first access does not occur immediately (and might not occur at all before the page is evicted).

  • As the database operates, pages in the buffer pool that are not accessed age by moving toward the tail of the list. Pages in both the new and old sublists age as other pages are made new. Pages in the old sublist also age as pages are inserted at the midpoint. Eventually, a page that remains unused for long enough reaches the tail of the old sublist and is evicted.

By default, pages read by queries immediately move into the new sublist, meaning they stay in the buffer pool longer. A table scan (such as performed for a mysqldump operation, or a SELECT statement with no WHERE clause) can bring a large amount of data into the buffer pool and evict an equivalent amount of older data, even if the new data is never used again. Similarly, pages that are loaded by the read-ahead background thread and then accessed only once move to the head of the new list. These situations can push frequently used pages to the old sublist, where they become subject to eviction. For information about optimizing this behavior, see Section 15.6.3.4, “Making the Buffer Pool Scan Resistant”, and Section 15.6.3.5, “Configuring InnoDB Buffer Pool Prefetching (Read-Ahead)”.

InnoDB Standard Monitor output contains several fields in the BUFFER POOL AND MEMORY section that pertain to operation of the buffer pool LRU algorithm. For details, see Section 15.6.3.9, “Monitoring the Buffer Pool Using the InnoDB Standard Monitor”.

InnoDB Buffer Pool Configuration Options

Several configuration options affect different aspects of the InnoDB buffer pool.

15.6.3.2 Configuring InnoDB Buffer Pool Size

You can configure InnoDB buffer pool size offline (at startup) or online, while the server is running. Behavior described in this section applies to both methods. For additional information about configuring buffer pool size online, see Configuring InnoDB Buffer Pool Size Online.

When increasing or decreasing innodb_buffer_pool_size, the operation is performed in chunks. Chunk size is defined by the innodb_buffer_pool_chunk_size configuration option, which has a default of 128M. For more information, see Configuring InnoDB Buffer Pool Chunk Size.

Buffer pool size must always be equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances. If you configure innodb_buffer_pool_size to a value that is not equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances, buffer pool size is automatically adjusted to a value that is equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances that is not less than the specified buffer pool size.

In the following example, innodb_buffer_pool_size is set to 8G, and innodb_buffer_pool_instances is set to 16. innodb_buffer_pool_chunk_size is 128M, which is the default value.

8G is a valid innodb_buffer_pool_size value because 8G is a multiple of innodb_buffer_pool_instances=16 * innodb_buffer_pool_chunk_size=128M, which is 2G.

shell> mysqld --innodb_buffer_pool_size=8G --innodb_buffer_pool_instances=16

mysql> SELECT @@innodb_buffer_pool_size/1024/1024/1024;
+------------------------------------------+
| @@innodb_buffer_pool_size/1024/1024/1024 |
+------------------------------------------+
|                           8.000000000000 |
+------------------------------------------+

In this example, innodb_buffer_pool_size is set to 9G, and innodb_buffer_pool_instances is set to 16. innodb_buffer_pool_chunk_size is 128M, which is the default value. In this case, 9G is not a multiple of innodb_buffer_pool_instances=16 * innodb_buffer_pool_chunk_size=128M, so innodb_buffer_pool_size is adjusted to 10G, which is the next multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances that is not less than the specified buffer pool size.

shell> mysqld --innodb_buffer_pool_size=9G --innodb_buffer_pool_instances=16

mysql> SELECT @@innodb_buffer_pool_size/1024/1024/1024;
+------------------------------------------+
| @@innodb_buffer_pool_size/1024/1024/1024 |
+------------------------------------------+
|                          10.000000000000 |
+------------------------------------------+
Configuring InnoDB Buffer Pool Chunk Size

innodb_buffer_pool_chunk_size can be increased or decreased in 1MB (1048576 byte) units but can only be modified at startup, in a command line string or in a MySQL configuration file.

Command line:

shell> mysqld --innodb_buffer_pool_chunk_size=134217728

Configuration file:

[mysqld]
innodb_buffer_pool_chunk_size=134217728

The following conditions apply when altering innodb_buffer_pool_chunk_size:

  • If the new innodb_buffer_pool_chunk_size value * innodb_buffer_pool_instances is larger than the current buffer pool size when the buffer pool is initialized, innodb_buffer_pool_chunk_size is truncated to innodb_buffer_pool_size / innodb_buffer_pool_instances.

    For example, if the buffer pool is initialized with a size of 2GB (2147483648 bytes), 4 buffer pool instances, and a chunk size of 1GB (1073741824 bytes), chunk size is truncated to a value equal to innodb_buffer_pool_size / innodb_buffer_pool_instances, as shown below:

    shell> mysqld --innodb_buffer_pool_size=2147483648 --innodb_buffer_pool_instances=4 \
           --innodb_buffer_pool_chunk_size=1073741824
    
    mysql> SELECT @@innodb_buffer_pool_size;
    +---------------------------+
    | @@innodb_buffer_pool_size |
    +---------------------------+
    |                2147483648 |
    +---------------------------+
    
    mysql> SELECT @@innodb_buffer_pool_instances;
    +--------------------------------+
    | @@innodb_buffer_pool_instances |
    +--------------------------------+
    |                              4 |
    +--------------------------------+
    
    # Chunk size was set to 1GB (1073741824 bytes) on startup but was
    # truncated to innodb_buffer_pool_size / innodb_buffer_pool_instances
    
    mysql> SELECT @@innodb_buffer_pool_chunk_size;
    +---------------------------------+
    | @@innodb_buffer_pool_chunk_size |
    +---------------------------------+
    |                       536870912 |
    +---------------------------------+
  • Buffer pool size must always be equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances. If you alter innodb_buffer_pool_chunk_size, innodb_buffer_pool_size is automatically adjusted to a value that is equal to or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances that is not less than current buffer pool size. The adjustment occurs when the buffer pool is initialized. This behavior is demonstrated in the following example:

    # The buffer pool has a default size of 128MB (134217728 bytes)
    
    mysql> SELECT @@innodb_buffer_pool_size;
    +---------------------------+
    | @@innodb_buffer_pool_size |
    +---------------------------+
    |                 134217728 |
    +---------------------------+
    
    # The chunk size is also 128MB (134217728 bytes)
    
    mysql> SELECT @@innodb_buffer_pool_chunk_size;
    +---------------------------------+
    | @@innodb_buffer_pool_chunk_size |
    +---------------------------------+
    |                       134217728 |
    +---------------------------------+
    
    # There is a single buffer pool instance
    
    mysql> SELECT @@innodb_buffer_pool_instances;
    +--------------------------------+
    | @@innodb_buffer_pool_instances |
    +--------------------------------+
    |                              1 |
    +--------------------------------+
    
    # Chunk size is decreased by 1MB (1048576 bytes) at startup
    # (134217728 - 1048576 = 133169152):
    
    shell> mysqld --innodb_buffer_pool_chunk_size=133169152
    
    mysql> SELECT @@innodb_buffer_pool_chunk_size;
    +---------------------------------+
    | @@innodb_buffer_pool_chunk_size |
    +---------------------------------+
    |                       133169152 |
    +---------------------------------+
    
    # Buffer pool size increases from 134217728 to 266338304
    # Buffer pool size is automatically adjusted to a value that is equal to
    # or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances
    # that is not less than current buffer pool size
    
    mysql> SELECT @@innodb_buffer_pool_size;
    +---------------------------+
    | @@innodb_buffer_pool_size |
    +---------------------------+
    |                 266338304 |
    +---------------------------+

    This example demonstrates the same behavior but with multiple buffer pool instances:

    # The buffer pool size is 2GB (2147483648 bytes)
    
    mysql> SELECT @@innodb_buffer_pool_size;
    +---------------------------+
    | @@innodb_buffer_pool_size |
    +---------------------------+
    |                2147483648 |
    +---------------------------+
    
    # The chunk size is .5 GB (536870912 bytes)
    
    mysql> SELECT @@innodb_buffer_pool_chunk_size;
    +---------------------------------+
    | @@innodb_buffer_pool_chunk_size |
    +---------------------------------+
    |                       536870912 |
    +---------------------------------+
    
    # There are 4 buffer pool instances
    
    mysql> SELECT @@innodb_buffer_pool_instances;
    +--------------------------------+
    | @@innodb_buffer_pool_instances |
    +--------------------------------+
    |                              4 |
    +--------------------------------+
    
    # Chunk size is decreased by 1MB (1048576 bytes) at startup
    # (536870912 - 1048576 = 535822336):
    
    shell> mysqld --innodb_buffer_pool_chunk_size=535822336
    
    mysql> SELECT @@innodb_buffer_pool_chunk_size;
    +---------------------------------+
    | @@innodb_buffer_pool_chunk_size |
    +---------------------------------+
    |                       535822336 |
    +---------------------------------+
    
    # Buffer pool size increases from 2147483648 to 4286578688
    # Buffer pool size is automatically adjusted to a value that is equal to
    # or a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances
    # that is not less than current buffer pool size of 2147483648
    
    mysql> SELECT @@innodb_buffer_pool_size;
    +---------------------------+
    | @@innodb_buffer_pool_size |
    +---------------------------+
    |                4286578688 |
    +---------------------------+

    Care should be taken when changing innodb_buffer_pool_chunk_size, as changing this value can increase the size of the buffer pool, as shown in the examples above. Before you change innodb_buffer_pool_chunk_size, calculate the effect on innodb_buffer_pool_size to ensure that the resulting buffer pool size is acceptable.

Note

To avoid potential performance issues, the number of chunks (innodb_buffer_pool_size / innodb_buffer_pool_chunk_size) should not exceed 1000.

Configuring InnoDB Buffer Pool Size Online

The innodb_buffer_pool_size configuration option can be set dynamically using a SET statement, allowing you to resize the buffer pool without restarting the server. For example:

mysql> SET GLOBAL innodb_buffer_pool_size=402653184;

Active transactions and operations performed through InnoDB APIs should be completed before resizing the buffer pool. When initiating a resizing operation, the operation does not start until all active transactions are completed. Once the resizing operation is in progress, new transactions and operations that require access to the buffer pool must wait until the resizing operation finishes. The exception to the rule is that concurrent access to the buffer pool is permitted while the buffer pool is defragmented and pages are withdrawn when buffer pool size is decreased. A drawback of allowing concurrent access is that it could result in a temporary shortage of available pages while pages are being withdrawn.

Note

Nested transactions could fail if initiated after the buffer pool resizing operation begins.

Monitoring Online Buffer Pool Resizing Progress

The Innodb_buffer_pool_resize_status status variable reports buffer pool resizing progress. For example:

mysql> SHOW STATUS WHERE Variable_name='InnoDB_buffer_pool_resize_status';
+----------------------------------+----------------------------------+
| Variable_name                    | Value                            |
+----------------------------------+----------------------------------+
| Innodb_buffer_pool_resize_status | Resizing also other hash tables. |
+----------------------------------+----------------------------------+

Buffer pool resizing progress is also logged in the server error log file. This example shows notes that are logged when increasing the size of the buffer pool:

[Note] InnoDB: Resizing buffer pool from 134217728 to 4294967296. (unit=134217728)
[Note] InnoDB: disabled adaptive hash index.
[Note] InnoDB: buffer pool 0 : 31 chunks (253952 blocks) was added.
[Note] InnoDB: buffer pool 0 : hash tables were resized.
[Note] InnoDB: Resized hash tables at lock_sys, adaptive hash index, dictionary.
[Note] InnoDB: completed to resize buffer pool from 134217728 to 4294967296.
[Note] InnoDB: re-enabled adaptive hash index.

This example shows notes that are logged when decreasing the size of the buffer pool:

[Note] InnoDB: Resizing buffer pool from 4294967296 to 134217728. (unit=134217728)
[Note] InnoDB: disabled adaptive hash index.
[Note] InnoDB: buffer pool 0 : start to withdraw the last 253952 blocks.
[Note] InnoDB: buffer pool 0 : withdrew 253952 blocks from free list. tried to relocate 0 pages.
(253952/253952)
[Note] InnoDB: buffer pool 0 : withdrawn target 253952 blocks.
[Note] InnoDB: buffer pool 0 : 31 chunks (253952 blocks) was freed.
[Note] InnoDB: buffer pool 0 : hash tables were resized.
[Note] InnoDB: Resized hash tables at lock_sys, adaptive hash index, dictionary.
[Note] InnoDB: completed to resize buffer pool from 4294967296 to 134217728.
[Note] InnoDB: re-enabled adaptive hash index.
Online Buffer Pool Resizing Internals

The resizing operation is performed by a background thread. When increasing the size of the buffer pool, the resizing operation:

  • Adds pages in chunks (chunk size is defined by innodb_buffer_pool_chunk_size)

  • Converts hash tables, lists, and pointers to use new addresses in memory

  • Adds new pages to the free list

While these operations are in progress, other threads are blocked from accessing the buffer pool.

When decreasing the size of the buffer pool, the resizing operation:

  • Defragments the buffer pool and withdraws (frees) pages

  • Removes pages in chunks (chunk size is defined by innodb_buffer_pool_chunk_size)

  • Converts hash tables, lists, and pointers to use new addresses in memory

Of these operations, only defragmenting the buffer pool and withdrawing pages allow other threads to access the buffer pool concurrently.

15.6.3.3 Configuring Multiple Buffer Pool Instances

For systems with buffer pools in the multi-gigabyte range, dividing the buffer pool into separate instances can improve concurrency by reducing contention as different threads read and write to cached pages. Multiple buffer pool instances are configured using the innodb_buffer_pool_instances configuration option, and you might also adjust the innodb_buffer_pool_size value.

When the InnoDB buffer pool is large, many data requests can be satisfied by retrieving from memory. You might encounter bottlenecks from multiple threads trying to access the buffer pool at once. You can enable multiple buffer pools to minimize this contention. Each page that is stored in or read from the buffer pool is assigned to one of the buffer pools randomly, using a hashing function. Each buffer pool manages its own free lists, flush lists, LRUs, and all other data structures connected to a buffer pool, and is protected by its own buffer pool mutex.

To enable multiple buffer pool instances, set the innodb_buffer_pool_instances configuration option to a value greater than 1 (the default) up to 64 (the maximum). This option takes effect only when you set innodb_buffer_pool_size to a size of 1GB or more. The total size you specify is divided among all the buffer pools. For best efficiency, specify a combination of innodb_buffer_pool_instances and innodb_buffer_pool_size so that each buffer pool instance is at least 1GB.
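For example, the following settings (the sizes are illustrative) divide a 16GB buffer pool into 8 instances of 2GB each, satisfying the guideline that each instance be at least 1GB:

[mysqld]
innodb_buffer_pool_size=16G
innodb_buffer_pool_instances=8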

For information about modifying InnoDB buffer pool size, see Section 15.6.3.2, “Configuring InnoDB Buffer Pool Size”.

15.6.3.4 Making the Buffer Pool Scan Resistant

Rather than using a strict LRU algorithm, InnoDB uses a technique to minimize the amount of data that is brought into the buffer pool and never accessed again. The goal is to make sure that frequently accessed (hot) pages remain in the buffer pool, even as read-ahead and full table scans bring in new blocks that might or might not be accessed afterward.

Newly read blocks are inserted into the middle of the LRU list. All newly read pages are inserted at a location that by default is 3/8 from the tail of the LRU list. The pages are moved to the front of the list (the most-recently used end) when they are accessed in the buffer pool for the first time. Thus, pages that are never accessed never make it to the front portion of the LRU list, and age out sooner than with a strict LRU approach. This arrangement divides the LRU list into two segments, where the pages downstream of the insertion point are considered old and are desirable victims for LRU eviction.

For an explanation of the inner workings of the InnoDB buffer pool and specifics about the LRU algorithm, see Section 15.6.3.1, “The InnoDB Buffer Pool”.

You can control the insertion point in the LRU list and choose whether InnoDB applies the same optimization to blocks brought into the buffer pool by table or index scans. The configuration parameter innodb_old_blocks_pct controls the percentage of old blocks in the LRU list. The default value of innodb_old_blocks_pct is 37, corresponding to the original fixed ratio of 3/8. The value range is 5 (new pages in the buffer pool age out very quickly) to 95 (only 5% of the buffer pool is reserved for hot pages, making the algorithm close to the familiar LRU strategy).

The optimization that keeps the buffer pool from being churned by read-ahead can avoid similar problems due to table or index scans. In these scans, a data page is typically accessed a few times in quick succession and is never touched again. The configuration parameter innodb_old_blocks_time specifies the time window (in milliseconds) after the first access to a page during which it can be accessed without being moved to the front (most-recently used end) of the LRU list. The default value of innodb_old_blocks_time is 1000. Increasing this value makes more and more blocks likely to age out faster from the buffer pool.

Both innodb_old_blocks_pct and innodb_old_blocks_time are dynamic global variables, so they can be specified in the MySQL option file (my.cnf or my.ini) or changed at runtime with the SET GLOBAL statement. Changing either setting requires the SUPER privilege.

To help you gauge the effect of setting these parameters, the SHOW ENGINE INNODB STATUS command reports buffer pool statistics. For details, see Section 15.6.3.9, “Monitoring the Buffer Pool Using the InnoDB Standard Monitor”.

Because the effects of these parameters can vary widely based on your hardware configuration, your data, and the details of your workload, always benchmark to verify the effectiveness before changing these settings in any performance-critical or production environment.

In mixed workloads where most of the activity is OLTP type with periodic batch reporting queries which result in large scans, setting the value of innodb_old_blocks_time during the batch runs can help keep the working set of the normal workload in the buffer pool.
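A sketch of that approach, with an illustrative value rather than a recommendation: raise innodb_old_blocks_time before the batch window so that scanned pages are not made young, then restore the default afterward.

mysql> SET GLOBAL innodb_old_blocks_time=10000;

(run the batch reporting queries)

mysql> SET GLOBAL innodb_old_blocks_time=1000;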

When scanning large tables that cannot fit entirely in the buffer pool, setting innodb_old_blocks_pct to a small value keeps the data that is only read once from consuming a significant portion of the buffer pool. For example, setting innodb_old_blocks_pct=5 restricts this data that is only read once to 5% of the buffer pool.

When scanning small tables that do fit into memory, there is less overhead for moving pages around within the buffer pool, so you can leave innodb_old_blocks_pct at its default value, or even higher, such as innodb_old_blocks_pct=50.

The effect of the innodb_old_blocks_time parameter is harder to predict than the innodb_old_blocks_pct parameter, is relatively small, and varies more with the workload. To arrive at an optimal value, conduct your own benchmarks if the performance improvement from adjusting innodb_old_blocks_pct is not sufficient.

15.6.3.5 Configuring InnoDB Buffer Pool Prefetching (Read-Ahead)

A read-ahead request is an I/O request to prefetch multiple pages in the buffer pool asynchronously, in anticipation that these pages will be needed soon. The requests bring in all the pages in one extent. InnoDB uses two read-ahead algorithms to improve I/O performance:

Linear read-ahead is a technique that predicts what pages might be needed soon based on pages in the buffer pool being accessed sequentially. You control when InnoDB performs a read-ahead operation by adjusting the number of sequential page accesses required to trigger an asynchronous read request, using the configuration parameter innodb_read_ahead_threshold. Before this parameter was added, InnoDB would only calculate whether to issue an asynchronous prefetch request for the entire next extent when it read in the last page of the current extent.

The configuration parameter innodb_read_ahead_threshold controls how sensitive InnoDB is in detecting patterns of sequential page access. If the number of pages read sequentially from an extent is greater than or equal to innodb_read_ahead_threshold, InnoDB initiates an asynchronous read-ahead operation of the entire following extent. innodb_read_ahead_threshold can be set to any value from 0-64. The default value is 56. The higher the value, the more strict the access pattern check. For example, if you set the value to 48, InnoDB triggers a linear read-ahead request only when 48 pages in the current extent have been accessed sequentially. If the value is 8, InnoDB triggers an asynchronous read-ahead even if as few as 8 pages in the extent are accessed sequentially. You can set the value of this parameter in the MySQL configuration file, or change it dynamically with the SET GLOBAL command, which requires the SUPER privilege.

Random read-ahead is a technique that predicts when pages might be needed soon based on pages already in the buffer pool, regardless of the order in which those pages were read. If 13 consecutive pages from the same extent are found in the buffer pool, InnoDB asynchronously issues a request to prefetch the remaining pages of the extent. To enable this feature, set the configuration variable innodb_random_read_ahead to ON.
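For example (the values are illustrative), the following statements make linear read-ahead trigger after fewer sequential page accesses and enable random read-ahead at runtime; both require the SUPER privilege:

mysql> SET GLOBAL innodb_read_ahead_threshold=32;
mysql> SET GLOBAL innodb_random_read_ahead=ON;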

The SHOW ENGINE INNODB STATUS command displays statistics to help you evaluate the effectiveness of the read-ahead algorithm. Statistics include counter information for the following global status variables:

  • Innodb_buffer_pool_read_ahead

  • Innodb_buffer_pool_read_ahead_evicted

  • Innodb_buffer_pool_read_ahead_rnd

This information can be useful when fine-tuning the innodb_random_read_ahead setting.

For more information about I/O performance, see Section 9.5.8, “Optimizing InnoDB Disk I/O” and Section 9.12.2, “Optimizing Disk I/O”.

15.6.3.6 Configuring InnoDB Buffer Pool Flushing

InnoDB performs certain tasks in the background, including flushing of dirty pages (those pages that have been changed but are not yet written to the database files) from the buffer pool.

InnoDB starts flushing buffer pool pages when the percentage of dirty pages in the buffer pool reaches the low water mark setting defined by innodb_max_dirty_pages_pct_lwm. This option is intended to control the ratio of dirty pages in the buffer pool and ideally prevent the percentage of dirty pages from reaching innodb_max_dirty_pages_pct. If the percentage of dirty pages in the buffer pool exceeds innodb_max_dirty_pages_pct, InnoDB begins to aggressively flush buffer pool pages.

InnoDB uses an algorithm to estimate the required rate of flushing, based on the speed of redo log generation and the current rate of flushing. The intent is to smooth overall performance by ensuring that buffer flush activity keeps up with the need to keep the buffer pool clean. Automatically adjusting the rate of flushing can help to avoid sudden dips in throughput, when excessive buffer pool flushing limits the I/O capacity available for ordinary read and write activity.

InnoDB uses its log files in a circular fashion. Before reusing a portion of a log file, InnoDB flushes to disk all dirty buffer pool pages whose redo entries are contained in that portion of the log file, a process known as a sharp checkpoint. If a workload is write-intensive, it generates a lot of redo information, all written to the log file. If all available space in the log files is used up, a sharp checkpoint occurs, causing a temporary reduction in throughput. This situation can happen even if innodb_max_dirty_pages_pct is not reached.

InnoDB uses a heuristic-based algorithm to avoid such a scenario, by measuring the number of dirty pages in the buffer pool and the rate at which redo is being generated. Based on these numbers, InnoDB decides how many dirty pages to flush from the buffer pool each second. This self-adapting algorithm is able to deal with sudden changes in workload.

Internal benchmarking has shown that this algorithm not only maintains throughput over time, but can also improve overall throughput significantly.

Because adaptive flushing can significantly affect the I/O pattern of a workload, the innodb_adaptive_flushing configuration parameter lets you turn off this feature. The default value for innodb_adaptive_flushing is ON, enabling the adaptive flushing algorithm. You can set the value of this parameter in the MySQL option file (my.cnf or my.ini) or change it dynamically with the SET GLOBAL command, which requires the SUPER privilege.
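For example, to turn adaptive flushing off at runtime (a sketch only; benchmark the effect on your workload before relying on it):

mysql> SET GLOBAL innodb_adaptive_flushing=OFF;

The same setting can be placed in the option file as innodb_adaptive_flushing=0.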

For information about fine-tuning InnoDB buffer pool flushing behavior, see Section 15.6.3.7, “Fine-tuning InnoDB Buffer Pool Flushing”.

For more information about InnoDB I/O performance, see Section 9.5.8, “Optimizing InnoDB Disk I/O”.

15.6.3.7 Fine-tuning InnoDB Buffer Pool Flushing

The configuration options innodb_flush_neighbors and innodb_lru_scan_depth let you fine-tune certain aspects of the flushing process for the InnoDB buffer pool. These options primarily help write-intensive workloads. With heavy DML activity, flushing can fall behind if it is not aggressive enough, resulting in excessive memory use in the buffer pool; or, disk writes due to flushing can saturate your I/O capacity if that mechanism is too aggressive. The ideal settings depend on your workload, data access patterns, and storage configuration (for example, whether data is stored on HDD or SSD devices).

For systems with constant heavy workloads, or workloads that fluctuate widely, several configuration options let you fine-tune the flushing behavior for InnoDB tables:

  • innodb_adaptive_flushing_lwm

  • innodb_max_dirty_pages_pct_lwm

  • innodb_flushing_avg_loops

  • innodb_io_capacity_max

These options feed into the formula used by the innodb_adaptive_flushing option.

The innodb_adaptive_flushing, innodb_io_capacity and innodb_max_dirty_pages_pct options are limited or extended by the following options:

  • innodb_adaptive_flushing_lwm

  • innodb_io_capacity_max

  • innodb_max_dirty_pages_pct_lwm

The InnoDB adaptive flushing mechanism is not appropriate in all cases. It gives the most benefit when the redo log is in danger of filling up. The innodb_adaptive_flushing_lwm option specifies a low water mark percentage of redo log capacity; when that threshold is crossed, InnoDB turns on adaptive flushing even if not specified by the innodb_adaptive_flushing option.

If flushing activity falls far behind, InnoDB can flush more aggressively than specified by innodb_io_capacity. innodb_io_capacity_max represents an upper limit on the I/O capacity used in such emergency situations, so that the spike in I/O does not consume all the capacity of the server.

InnoDB tries to flush data from the buffer pool so that the percentage of dirty pages does not exceed the value of innodb_max_dirty_pages_pct. The default value for innodb_max_dirty_pages_pct is 75.

Note

The innodb_max_dirty_pages_pct setting establishes a target for flushing activity. It does not affect the rate of flushing. For information about managing the rate of flushing, see Section 15.6.3.6, “Configuring InnoDB Buffer Pool Flushing”.

The innodb_max_dirty_pages_pct_lwm option specifies a low water mark value that represents the percentage of dirty pages where pre-flushing is enabled to control the dirty page ratio and ideally prevent the percentage of dirty pages from reaching innodb_max_dirty_pages_pct. A value of innodb_max_dirty_pages_pct_lwm=0 disables the pre-flushing behavior.
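A hedged configuration sketch (the percentages are illustrative and workload-dependent) that enables pre-flushing well before the hard limit is reached:

[mysqld]
innodb_max_dirty_pages_pct=75
innodb_max_dirty_pages_pct_lwm=10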

Most of the options referenced above are most applicable to servers that run write-heavy workloads for long periods of time and have little reduced load time to catch up with changes waiting to be written to disk.

innodb_flushing_avg_loops defines the number of iterations for which InnoDB keeps the previously calculated snapshot of the flushing state, which controls how quickly adaptive flushing responds to foreground load changes. Setting a high value for innodb_flushing_avg_loops means that InnoDB keeps the previously calculated snapshot longer, so adaptive flushing responds more slowly. A high value also reduces positive feedback between foreground and background work, but when setting a high value it is important to ensure that InnoDB redo log utilization does not reach 75% (the hardcoded limit at which async flushing starts) and that the innodb_max_dirty_pages_pct setting keeps the number of dirty pages to a level that is appropriate for the workload.

Systems with consistent workloads, a large innodb_log_file_size, and small spikes that do not reach 75% redo log space utilization should use a high innodb_flushing_avg_loops value to keep flushing as smooth as possible. For systems with extreme load spikes or log files that do not provide a lot of space, consider a smaller innodb_flushing_avg_loops value. A smaller value allows flushing to closely track the load and helps avoid reaching 75% redo log space utilization.

15.6.3.8 Saving and Restoring the Buffer Pool State

To reduce the warmup period after restarting the server, InnoDB saves a percentage of the most recently used pages for each buffer pool at server shutdown and restores these pages at server startup. The percentage of recently used pages that is stored is defined by the innodb_buffer_pool_dump_pct configuration option.

After restarting a busy server, there is typically a warmup period with steadily increasing throughput, as disk pages that were in the buffer pool are brought back into memory (as the same data is queried, updated, and so on). The ability to restore the buffer pool at startup shortens the warmup period by reloading disk pages that were in the buffer pool before the restart rather than waiting for DML operations to access corresponding rows. Also, I/O requests can be performed in large batches, making the overall I/O faster. Page loading happens in the background, and does not delay database startup.

In addition to saving the buffer pool state at shutdown and restoring it at startup, you can save and restore the buffer pool state at any time, while the server is running. For example, you can save the state of the buffer pool after reaching a stable throughput under a steady workload. You could also restore the previous buffer pool state after running reports or maintenance jobs that bring data pages into the buffer pool that are only required for those operations, or after running some other non-typical workload.

Even though a buffer pool can be many gigabytes in size, the buffer pool data that InnoDB saves to disk is tiny by comparison. Only tablespace IDs and page IDs necessary to locate the appropriate pages are saved to disk. This information is derived from the INNODB_BUFFER_PAGE_LRU INFORMATION_SCHEMA table. By default, tablespace ID and page ID data is saved in a file named ib_buffer_pool, which is saved to the InnoDB data directory. The file name and location can be modified using the innodb_buffer_pool_filename configuration parameter.
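For example (the file name is an illustrative assumption), you can direct the next dump to a differently named file at runtime:

mysql> SET GLOBAL innodb_buffer_pool_filename='ib_buffer_pool.bak';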

Because data is cached in and aged out of the buffer pool as it is with regular database operations, there is no problem if the disk pages are recently updated, or if a DML operation involves data that has not yet been loaded. The loading mechanism skips requested pages that no longer exist.

The underlying mechanism involves a background thread that is dispatched to perform the dump and load operations.

Disk pages from compressed tables are loaded into the buffer pool in their compressed form. Pages are uncompressed as usual when page contents are accessed during DML operations. Because uncompressing pages is a CPU-intensive process, it is more efficient for concurrency to perform the operation in a connection thread rather than in the single thread that performs the buffer pool restore operation.

Operations related to saving and restoring the buffer pool state are described in the following topics:

Configuring the Dump Percentage for Buffer Pool Pages

Before dumping pages from the buffer pool, you can configure the percentage of most-recently-used buffer pool pages that you want to dump by setting the innodb_buffer_pool_dump_pct option. If you plan to dump buffer pool pages while the server is running, you can configure the option dynamically:

SET GLOBAL innodb_buffer_pool_dump_pct=40;

If you plan to dump buffer pool pages at server shutdown, set innodb_buffer_pool_dump_pct in your configuration file.

[mysqld]
innodb_buffer_pool_dump_pct=40

The innodb_buffer_pool_dump_pct default value was changed from 100 (dump all pages) to 25 (dump 25% of most-recently-used pages) in MySQL 5.7 when innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup were enabled by default.

Saving the Buffer Pool State at Shutdown and Restoring it at Startup

To save the state of the buffer pool at server shutdown, issue the following statement prior to shutting down the server:

SET GLOBAL innodb_buffer_pool_dump_at_shutdown=ON;

innodb_buffer_pool_dump_at_shutdown is enabled by default.

To restore the buffer pool state at server startup, specify the --innodb_buffer_pool_load_at_startup option when starting the server:

mysqld --innodb_buffer_pool_load_at_startup=ON

innodb_buffer_pool_load_at_startup is enabled by default.

Saving and Restoring the Buffer Pool State Online

To save the state of the buffer pool while MySQL server is running, issue the following statement:

SET GLOBAL innodb_buffer_pool_dump_now=ON;

To restore the buffer pool state while MySQL is running, issue the following statement:

SET GLOBAL innodb_buffer_pool_load_now=ON;
Displaying Buffer Pool Dump Progress

To display progress when saving the buffer pool state to disk, issue the following statement:

SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status';

If the operation has not yet started, not started is returned. If the operation is complete, the completion time is printed (e.g. Finished at 110505 12:18:02). If the operation is in progress, status information is provided (e.g. Dumping buffer pool 5/7, page 237/2873).

Displaying Buffer Pool Load Progress

To display progress when loading the buffer pool, issue the following statement:

SHOW STATUS LIKE 'Innodb_buffer_pool_load_status';

If the operation has not yet started, not started is returned. If the operation is complete, the completion time is printed (e.g. Finished at 110505 12:23:24). If the operation is in progress, status information is provided (e.g. Loaded 123/22301 pages).

Aborting a Buffer Pool Load Operation

To abort a buffer pool load operation, issue the following statement:

SET GLOBAL innodb_buffer_pool_load_abort=ON;
Monitoring Buffer Pool Load Progress Using Performance Schema

You can monitor buffer pool load progress using Performance Schema.

The following example demonstrates how to enable the stage/innodb/buffer pool load stage event instrument and related consumer tables to monitor buffer pool load progress.

For information about buffer pool dump and load procedures used in this example, see Section 15.6.3.8, “Saving and Restoring the Buffer Pool State”. For information about Performance Schema stage event instruments and related consumers, see Section 25.10.5, “Performance Schema Stage Event Tables”.

  1. Enable the stage/innodb/buffer pool load instrument:

    mysql> UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' 
           WHERE NAME LIKE 'stage/innodb/buffer%';
    
  2. Enable the stage event consumer tables, which include events_stages_current, events_stages_history, and events_stages_history_long.

    mysql> UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' 
           WHERE NAME LIKE '%stages%';
    
  3. Dump the current buffer pool state by enabling innodb_buffer_pool_dump_now.

    mysql> SET GLOBAL innodb_buffer_pool_dump_now=ON;
    
  4. Check the buffer pool dump status to ensure that the operation has completed.

    mysql> SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status'\G
    *************************** 1. row ***************************
    Variable_name: Innodb_buffer_pool_dump_status
            Value: Buffer pool(s) dump completed at 150202 16:38:58
    
  5. Load the buffer pool by enabling innodb_buffer_pool_load_now:

    mysql> SET GLOBAL innodb_buffer_pool_load_now=ON;
    
  6. Check the current status of the buffer pool load operation by querying the Performance Schema events_stages_current table. The WORK_COMPLETED column shows the number of buffer pool pages loaded. The WORK_ESTIMATED column provides an estimate of the remaining work, in pages.

    mysql> SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
           FROM performance_schema.events_stages_current;
    +-------------------------------+----------------+----------------+
    | EVENT_NAME                    | WORK_COMPLETED | WORK_ESTIMATED |
    +-------------------------------+----------------+----------------+
    | stage/innodb/buffer pool load |           5353 |           7167 |
    +-------------------------------+----------------+----------------+
    

    The events_stages_current table returns an empty set if the buffer pool load operation has completed. In this case, you can check the events_stages_history table to view data for the completed event. For example:

    mysql> SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED 
           FROM performance_schema.events_stages_history;
    +-------------------------------+----------------+----------------+
    | EVENT_NAME                    | WORK_COMPLETED | WORK_ESTIMATED |
    +-------------------------------+----------------+----------------+
    | stage/innodb/buffer pool load |           7167 |           7167 |
    +-------------------------------+----------------+----------------+
    
Note

You can also monitor buffer pool load progress using Performance Schema when loading the buffer pool at startup using innodb_buffer_pool_load_at_startup. In this case, the stage/innodb/buffer pool load instrument and related consumers must be enabled at startup. For more information, see Section 25.2.2, “Performance Schema Startup Configuration”.

15.6.3.9 Monitoring the Buffer Pool Using the InnoDB Standard Monitor

InnoDB Standard Monitor output, which can be accessed using SHOW ENGINE INNODB STATUS, provides metrics that pertain to operation of the InnoDB buffer pool. Buffer pool metrics are located in the BUFFER POOL AND MEMORY section of InnoDB Standard Monitor output and appear similar to the following:

----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 2198863872
Dictionary memory allocated 776332
Buffer pool size   131072
Free buffers       124908
Database pages     5720
Old database pages 2071
Modified db pages  910
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 4, not young 0
0.10 youngs/s, 0.00 non-youngs/s
Pages read 197, created 5523, written 5060
0.00 reads/s, 190.89 creates/s, 244.94 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not
0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read
ahead 0.00/s
LRU len: 5720, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]

The following table describes InnoDB buffer pool metrics reported by the InnoDB Standard Monitor.

Note

Per second averages provided in InnoDB Standard Monitor output are based on the elapsed time since InnoDB Standard Monitor output was last printed.

Table 15.2 InnoDB Buffer Pool Metrics

  • Total memory allocated: The total memory allocated for the buffer pool in bytes.
  • Dictionary memory allocated: The total memory allocated for the InnoDB data dictionary in bytes.
  • Buffer pool size: The total size in pages allocated to the buffer pool.
  • Free buffers: The total size in pages of the buffer pool free list.
  • Database pages: The total size in pages of the buffer pool LRU list.
  • Old database pages: The total size in pages of the buffer pool old LRU sublist.
  • Modified db pages: The current number of pages modified in the buffer pool.
  • Pending reads: The number of buffer pool pages waiting to be read into the buffer pool.
  • Pending writes LRU: The number of old dirty pages within the buffer pool to be written from the bottom of the LRU list.
  • Pending writes flush list: The number of buffer pool pages to be flushed during checkpointing.
  • Pending writes single page: The number of pending independent page writes within the buffer pool.
  • Pages made young: The total number of pages made young in the buffer pool LRU list (moved to the head of the sublist of new pages).
  • Pages made not young: The total number of pages not made young in the buffer pool LRU list (pages that have remained in the old sublist without being made young).
  • youngs/s: The per second average of accesses to old pages in the buffer pool LRU list that have resulted in making pages young. See the notes that follow this table for more information.
  • non-youngs/s: The per second average of accesses to old pages in the buffer pool LRU list that have resulted in not making pages young. See the notes that follow this table for more information.
  • Pages read: The total number of pages read from the buffer pool.
  • Pages created: The total number of pages created within the buffer pool.
  • Pages written: The total number of pages written from the buffer pool.
  • reads/s: The average number of buffer pool page reads per second.
  • creates/s: The average number of buffer pool pages created per second.
  • writes/s: The average number of buffer pool page writes per second.
  • Buffer pool hit rate: The buffer pool page hit rate for pages read from buffer pool memory vs. from disk storage.
  • young-making rate: The average hit rate at which page accesses have resulted in making pages young. See the notes that follow this table for more information.
  • not (young-making rate): The average hit rate at which page accesses have not resulted in making pages young. See the notes that follow this table for more information.
  • Pages read ahead: The per second average of read ahead operations.
  • Pages evicted without access: The per second average of the pages evicted without being accessed from the buffer pool.
  • Random read ahead: The per second average of random read ahead operations.
  • LRU len: The total size in pages of the buffer pool LRU list.
  • unzip_LRU len: The total size in pages of the buffer pool unzip_LRU list.
  • I/O sum: The total number of buffer pool LRU list pages accessed, for the last 50 seconds.
  • I/O cur: The total number of buffer pool LRU list pages accessed.
  • I/O unzip sum: The total number of buffer pool unzip_LRU list pages accessed.
  • I/O unzip cur: The total number of buffer pool unzip_LRU list pages accessed.

Notes:

  • The youngs/s metric only relates to old pages. It is based on the number of accesses to pages and not the number of pages. There can be multiple accesses to a given page, all of which are counted. If you see very low youngs/s values when there are no large scans occurring, you might need to reduce the delay time or increase the percentage of the buffer pool used for the old sublist. Increasing the percentage makes the old sublist larger, so pages in that sublist take longer to move to the tail and to be evicted. This increases the likelihood that the pages will be accessed again and be made young.

  • The non-youngs/s metric only relates to old pages. It is based on the number of accesses to pages and not the number of pages. There can be multiple accesses to a given page, all of which are counted. If you do not see a lot of non-youngs/s when you are doing large table scans (and lots of youngs/s), increase the delay value.

  • The young-making rate accounts for accesses to all buffer pool pages, not just accesses to pages in the old sublist. The young-making rate and not rate do not normally add up to the overall buffer pool hit rate. Page hits in the old sublist cause pages to move to the new sublist, but page hits in the new sublist cause pages to move to the head of the list only if they are a certain distance from the head.

  • not (young-making rate) is the average hit rate at which page accesses have not resulted in making pages young due to the delay defined by innodb_old_blocks_time not being met, or due to page hits in the new sublist that did not result in pages being moved to the head. This rate accounts for accesses to all buffer pool pages, not just accesses to pages in the old sublist.

InnoDB buffer pool server status variables and the INNODB_BUFFER_POOL_STATS table provide many of the same buffer pool metrics found in InnoDB Standard Monitor output. For more information about the INNODB_BUFFER_POOL_STATS table, see Example 15.19, “Querying the INNODB_BUFFER_POOL_STATS Table”.

15.6.4 Configuring the Memory Allocator for InnoDB

When InnoDB was developed, the memory allocators supplied with operating systems and run-time libraries were often lacking in performance and scalability. At that time, there were no memory allocator libraries tuned for multi-core CPUs. Therefore, InnoDB implemented its own memory allocator in the mem subsystem. This allocator is guarded by a single mutex, which may become a bottleneck. InnoDB also implements a wrapper interface around the system allocator (malloc and free) that is likewise guarded by a single mutex.

Today, as multi-core systems have become more widely available, and as operating systems have matured, significant improvements have been made in the memory allocators provided with operating systems. These new memory allocators perform better and are more scalable than they were in the past. Most workloads, especially those where memory is frequently allocated and released (such as multi-table joins), benefit from using a more highly tuned memory allocator as opposed to the internal, InnoDB-specific memory allocator.

You can control whether InnoDB uses its own memory allocator or an allocator of the operating system, by setting the value of the system configuration parameter innodb_use_sys_malloc in the MySQL option file (my.cnf or my.ini). If set to ON or 1 (the default), InnoDB uses the malloc and free functions of the underlying system rather than manage memory pools itself. This parameter is not dynamic, and takes effect only when the system is started. To continue to use the InnoDB memory allocator, set innodb_use_sys_malloc to 0.

When the InnoDB memory allocator is disabled, InnoDB ignores the value of the parameter innodb_additional_mem_pool_size. The InnoDB memory allocator uses an additional memory pool for satisfying allocation requests without having to fall back to the system memory allocator. When the InnoDB memory allocator is disabled, all such allocation requests are fulfilled by the system memory allocator.
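
As an illustration only, on a MySQL release that still supports these options (they were removed in MySQL 5.7, as noted below), the InnoDB internal allocator could be re-enabled with option-file settings such as the following; the values shown are examples, not recommendations:

[mysqld]
# Use the internal InnoDB allocator instead of the system malloc and free.
innodb_use_sys_malloc=0
# Size of the additional memory pool used by the internal allocator.
innodb_additional_mem_pool_size=8M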

On Unix-like systems that use dynamic linking, replacing the memory allocator may be as easy as making the environment variable LD_PRELOAD or LD_LIBRARY_PATH point to the dynamic library that implements the allocator. On other systems, some relinking may be necessary. Please refer to the documentation of the memory allocator library of your choice.

Since InnoDB cannot track all memory use when the system memory allocator is used (innodb_use_sys_malloc is ON), the section BUFFER POOL AND MEMORY in the output of the SHOW ENGINE INNODB STATUS command only includes the buffer pool statistics in the Total memory allocated. Any memory allocated using the mem subsystem or using ut_malloc is excluded.

Note

innodb_use_sys_malloc and innodb_additional_mem_pool_size were deprecated in MySQL 5.6 and removed in MySQL 5.7.

For more information about the performance implications of InnoDB memory usage, see Section 9.10, “Buffering and Caching”.

15.6.5 Configuring InnoDB Change Buffering

When INSERT, UPDATE, and DELETE operations are performed on a table, the values of indexed columns (particularly the values of secondary keys) are often in an unsorted order, requiring substantial I/O to bring secondary indexes up to date. InnoDB has a change buffer that caches changes to secondary index entries when the relevant page is not in the buffer pool, thus avoiding expensive I/O operations by not immediately reading in the page from disk. The buffered changes are merged when the page is loaded to the buffer pool, and the updated page is later flushed to disk. The InnoDB main thread merges buffered changes when the server is nearly idle, and during a slow shutdown.

Because it can result in fewer disk reads and writes, the change buffer feature is most valuable for workloads that are I/O-bound, for example applications with a high volume of DML operations such as bulk inserts.

However, the change buffer occupies a part of the buffer pool, reducing the memory available to cache data pages. If the working set almost fits in the buffer pool, or if your tables have relatively few secondary indexes, it may be useful to disable change buffering. If the working set fits entirely within the buffer pool, change buffering does not impose extra overhead, because it applies only to pages that are not in the buffer pool.

You can control the extent to which InnoDB performs change buffering using the innodb_change_buffering configuration parameter. You can enable or disable buffering for inserts, delete operations (when index records are initially marked for deletion) and purge operations (when index records are physically deleted). An update operation is a combination of an insert and a delete. The default innodb_change_buffering value is all.

Permitted innodb_change_buffering values include:

  • all

    The default value: buffer inserts, delete-marking operations, and purges.

  • none

    Do not buffer any operations.

  • inserts

    Buffer insert operations.

  • deletes

    Buffer delete-marking operations.

  • changes

    Buffer both inserts and delete-marking operations.

  • purges

    Buffer the physical deletion operations that happen in the background.

You can set the innodb_change_buffering parameter in the MySQL option file (my.cnf or my.ini) or change it dynamically with the SET GLOBAL command, which requires the SUPER privilege. Changing the setting affects the buffering of new operations; the merging of existing buffered entries is not affected.
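
For example, the following statements disable change buffering for new operations and then verify the setting; the value 'none' is used here only for illustration:

mysql> SET GLOBAL innodb_change_buffering = 'none';
mysql> SHOW VARIABLES LIKE 'innodb_change_buffering';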

For related information, see Section 15.4.2, “Change Buffer”. For information about configuring change buffer size, see Section 15.6.5.1, “Configuring the Change Buffer Maximum Size”.

15.6.5.1 Configuring the Change Buffer Maximum Size

As of MySQL 5.6.2, the innodb_change_buffer_max_size configuration option allows you to configure the maximum size of the change buffer as a percentage of the total size of the buffer pool. By default, innodb_change_buffer_max_size is set to 25. The maximum setting is 50.

You might consider increasing innodb_change_buffer_max_size on a MySQL server with heavy insert, update, and delete activity, where change buffer merging does not keep pace with new change buffer entries, causing the change buffer to reach its maximum size limit.

You might consider decreasing innodb_change_buffer_max_size on a MySQL server with static data used for reporting, or if the change buffer consumes too much of the memory space that is shared with the buffer pool, causing pages to age out of the buffer pool sooner than desired.

Test different settings with a representative workload to determine an optimal configuration. The innodb_change_buffer_max_size setting is dynamic, which allows you to modify the setting without restarting the server.
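
For example, the following statement raises the limit to 30 percent of the buffer pool; the value is illustrative only:

mysql> SET GLOBAL innodb_change_buffer_max_size = 30;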

15.6.6 Configuring Thread Concurrency for InnoDB

InnoDB uses operating system threads to process requests from user transactions. (Transactions may issue many requests to InnoDB before they commit or roll back.) On modern operating systems and servers with multi-core processors, where context switching is efficient, most workloads run well without any limit on the number of concurrent threads. Scalability improvements in MySQL 5.5 and up reduce the need to limit the number of concurrently executing threads inside InnoDB.

In situations where it is helpful to minimize context switching between threads, InnoDB can use a number of techniques to limit the number of concurrently executing operating system threads (and thus the number of requests that are processed at any one time). When InnoDB receives a new request from a user session, if the number of threads concurrently executing is at a pre-defined limit, the new request sleeps for a short time before it tries again. A request that cannot be rescheduled after the sleep is put in a first-in/first-out queue and eventually is processed. Threads waiting for locks are not counted in the number of concurrently executing threads.

You can limit the number of concurrent threads by setting the configuration parameter innodb_thread_concurrency. Once the number of executing threads reaches this limit, additional threads sleep for a number of microseconds, set by the configuration parameter innodb_thread_sleep_delay, before being placed into the queue.

Previously, it required experimentation to find the optimal value for innodb_thread_sleep_delay, and the optimal value could change depending on the workload. In MySQL 5.6.3 and higher, you can set the configuration option innodb_adaptive_max_sleep_delay to the highest value you would allow for innodb_thread_sleep_delay, and InnoDB automatically adjusts innodb_thread_sleep_delay up or down depending on the current thread-scheduling activity. This dynamic adjustment helps the thread scheduling mechanism to work smoothly during times when the system is lightly loaded and when it is operating near full capacity.

The default value for innodb_thread_concurrency and the implied default limit on the number of concurrent threads has been changed in various releases of MySQL and InnoDB. The default value of innodb_thread_concurrency is 0, so that by default there is no limit on the number of concurrently executing threads.

InnoDB causes threads to sleep only when the number of concurrent threads is limited. When there is no limit on the number of threads, all contend equally to be scheduled. That is, if innodb_thread_concurrency is 0, the value of innodb_thread_sleep_delay is ignored.

When there is a limit on the number of threads (when innodb_thread_concurrency is > 0), InnoDB reduces context switching overhead by permitting multiple requests made during the execution of a single SQL statement to enter InnoDB without observing the limit set by innodb_thread_concurrency. Since an SQL statement (such as a join) may comprise multiple row operations within InnoDB, InnoDB assigns a specified number of tickets that allow a thread to be scheduled repeatedly with minimal overhead.

When a new SQL statement starts, a thread has no tickets, and it must observe innodb_thread_concurrency. Once the thread is entitled to enter InnoDB, it is assigned a number of tickets that it can use for subsequently entering InnoDB to perform row operations. If the tickets run out, the thread is evicted, and innodb_thread_concurrency is observed again, which may place the thread back into the first-in/first-out queue of waiting threads. When the thread is once again entitled to enter InnoDB, tickets are assigned again. The number of tickets assigned is specified by the global option innodb_concurrency_tickets, which is 5000 by default. A thread that is waiting for a lock is given one ticket once the lock becomes available.

The correct values of these variables depend on your environment and workload. Try a range of different values to determine what value works for your applications. Before limiting the number of concurrently executing threads, review configuration options that may improve the performance of InnoDB on multi-core and multi-processor computers, such as innodb_adaptive_hash_index.
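
As a sketch of such an experiment, the following statements impose a limit of 16 concurrently executing threads and leave the related options at their default values; all values shown are illustrative and should be adjusted for your workload:

mysql> SET GLOBAL innodb_thread_concurrency = 16;
mysql> SET GLOBAL innodb_adaptive_max_sleep_delay = 150000;
mysql> SET GLOBAL innodb_concurrency_tickets = 5000;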

For general performance information about MySQL thread handling, see Section 9.12.5.1, “How MySQL Uses Threads for Client Connections”.

15.6.7 Configuring the Number of Background InnoDB I/O Threads

InnoDB uses background threads to service various types of I/O requests. You can configure the number of background threads that service read and write I/O on data pages using the innodb_read_io_threads and innodb_write_io_threads configuration parameters. These parameters signify the number of background threads used for read and write requests, respectively. They are effective on all supported platforms. You can set values for these parameters in the MySQL option file (my.cnf or my.ini); you cannot change values dynamically. The default value for these parameters is 4 and permissible values range from 1 to 64.
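
For example, a server with a fast I/O subsystem might use option-file settings such as the following; the values are illustrative only and take effect at the next server restart:

[mysqld]
innodb_read_io_threads=8
innodb_write_io_threads=8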

The purpose of these configuration options is to make InnoDB more scalable on high-end systems. Each background thread can handle up to 256 pending I/O requests. A major source of background I/O is read-ahead requests. InnoDB tries to balance the load of incoming requests in such a way that most background threads share work equally. InnoDB also attempts to allocate read requests from the same extent to the same thread, to increase the chances of coalescing the requests. If you have a high-end I/O subsystem and you see more than 64 × innodb_read_io_threads pending read requests in SHOW ENGINE INNODB STATUS output, you might improve performance by increasing the value of innodb_read_io_threads.

On Linux systems, InnoDB uses the asynchronous I/O subsystem by default to perform read-ahead and write requests for data file pages, which changes the way that InnoDB background threads service these types of I/O requests. For more information, see Section 15.6.8, “Using Asynchronous I/O on Linux”.

For more information about InnoDB I/O performance, see Section 9.5.8, “Optimizing InnoDB Disk I/O”.

15.6.8 Using Asynchronous I/O on Linux

InnoDB uses the asynchronous I/O subsystem (native AIO) on Linux to perform read-ahead and write requests for data file pages. This behavior is controlled by the innodb_use_native_aio configuration option, which applies to Linux systems only and is enabled by default. On other Unix-like systems, InnoDB uses synchronous I/O only. Historically, InnoDB only used asynchronous I/O on Windows systems. Using the asynchronous I/O subsystem on Linux requires the libaio library.

With synchronous I/O, query threads queue I/O requests, and InnoDB background threads retrieve the queued requests one at a time, issuing a synchronous I/O call for each. When an I/O request is completed and the I/O call returns, the InnoDB background thread that is handling the request calls an I/O completion routine and returns to process the next request. The number of requests that can be processed in parallel is n, where n is the number of InnoDB background threads. The number of InnoDB background threads is controlled by innodb_read_io_threads and innodb_write_io_threads. See Section 15.6.7, “Configuring the Number of Background InnoDB I/O Threads”.

With native AIO, query threads dispatch I/O requests directly to the operating system, thereby removing the limit imposed by the number of background threads. InnoDB background threads wait for I/O events to signal completed requests. When a request is completed, a background thread calls an I/O completion routine and resumes waiting for I/O events.

The advantage of native AIO is scalability for heavily I/O-bound systems that typically show many pending reads/writes in SHOW ENGINE INNODB STATUS\G output. The increase in parallel processing when using native AIO means that the type of I/O scheduler or properties of the disk array controller have a greater influence on I/O performance.

A potential disadvantage of native AIO is less control over the number of I/O requests dispatched to the operating system at once. Too many I/O requests dispatched to the operating system for parallel processing could adversely affect disk read response times.

If a problem with the asynchronous I/O subsystem in the OS prevents InnoDB from starting, you can start the server with innodb_use_native_aio=0. This option may also be disabled automatically during startup if InnoDB detects a potential problem such as a combination of tmpdir location, tmpfs file system, and Linux kernel that does not support asynchronous I/O on tmpfs.
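
For example, to disable native AIO at the next startup, you could add the following to the option file; this is a minimal sketch:

[mysqld]
innodb_use_native_aio=0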

15.6.9 Configuring the InnoDB Master Thread I/O Rate

The master thread in InnoDB is a thread that performs various tasks in the background. Most of these tasks are I/O related, such as flushing dirty pages from the buffer pool or writing changes from the insert buffer to the appropriate secondary indexes. The master thread attempts to perform these tasks in a way that does not adversely affect the normal working of the server. It tries to estimate the free I/O bandwidth available and tune its activities to take advantage of this free capacity. Historically, InnoDB has used a hardcoded value of 100 IOPS (input/output operations per second) as the total I/O capacity of the server.

The parameter innodb_io_capacity indicates the overall I/O capacity available to InnoDB. This parameter should be set to approximately the number of I/O operations that the system can perform per second. The value depends on your system configuration. When innodb_io_capacity is set, the master thread estimates the I/O bandwidth available for background tasks based on the set value. Setting the value to 100 reverts to the old behavior.

You can set the value of innodb_io_capacity to any number 100 or greater. The default value is 200, reflecting that the performance of typical modern I/O devices is higher than in the early days of MySQL. Typically, values around the previous default of 100 are appropriate for consumer-level storage devices, such as hard drives up to 7200 RPM. Faster hard drives, RAID configurations, and SSDs benefit from higher values.

The innodb_io_capacity setting is a total limit for all buffer pool instances. When dirty pages are flushed, the innodb_io_capacity limit is divided equally among buffer pool instances. For more information, see the innodb_io_capacity system variable description.

You can set the value of this parameter in the MySQL option file (my.cnf or my.ini) or change it dynamically with the SET GLOBAL command, which requires the SUPER privilege.
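
For example, the following statement raises the limit for a server backed by fast storage; 2000 is an illustrative value only:

mysql> SET GLOBAL innodb_io_capacity = 2000;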

The innodb_flush_sync configuration option causes the innodb_io_capacity setting to be ignored during bursts of I/O activity that occur at checkpoints. innodb_flush_sync is enabled by default.

Formerly, the InnoDB master thread also performed any needed purge operations. In MySQL 5.6.5 and higher, those I/O operations are moved to other background threads, whose number is controlled by the innodb_purge_threads configuration option.

For more information about InnoDB I/O performance, see Section 9.5.8, “Optimizing InnoDB Disk I/O”.

15.6.10 Configuring Spin Lock Polling

Many InnoDB mutexes and rw-locks are reserved for a short time. On a multi-core system, it can be more efficient for a thread to continuously check if it can acquire a mutex or rw-lock for a while before sleeping. If the mutex or rw-lock becomes available during this polling period, the thread can continue immediately, in the same time slice. However, too-frequent polling of a shared object by multiple threads can cause cache ping pong, in which different processors invalidate portions of each other's caches. InnoDB minimizes this issue by waiting a random time between subsequent polls. The delay is implemented as a busy loop.

You can control the maximum delay between testing a mutex or rw-lock using the parameter innodb_spin_wait_delay. The duration of the delay loop depends on the C compiler and the target processor. (In the 100MHz Pentium era, the unit of delay was one microsecond.) On a system where all processor cores share a fast cache memory, you might reduce the maximum delay or disable the busy loop altogether by setting innodb_spin_wait_delay=0. On a system with multiple processor chips, the effect of cache invalidation can be more significant and you might increase the maximum delay.

The default value of innodb_spin_wait_delay is 6. The spin wait delay is a dynamic, global parameter that you can specify in the MySQL option file (my.cnf or my.ini) or change at runtime with the command SET GLOBAL innodb_spin_wait_delay=delay, where delay is the desired maximum delay. Changing the setting requires the SUPER privilege.

For performance considerations for InnoDB locking operations, see Section 9.11, “Optimizing Locking Operations”.

15.6.11 Configuring InnoDB Purge Scheduling

The purge operations (a type of garbage collection) that InnoDB performs automatically may be performed by one or more separate threads rather than as part of the master thread. The use of separate threads improves scalability by allowing the main database operations to run independently from maintenance work happening in the background.

To control this feature, increase the value of the configuration option innodb_purge_threads. If DML activity is concentrated on a single table or a few tables, keep the setting low so that the threads do not contend with each other for access to the busy tables. If DML operations are spread across many tables, increase the setting. The maximum value is 32.

There is another related configuration option, innodb_purge_batch_size, with a default value of 300 and a maximum value of 5000. This option is mainly intended for experimentation and tuning of purge operations, and is not typically of interest to most users.
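
As a sketch, a server with DML spread across many tables might use option-file settings such as the following; the values are illustrative, and innodb_purge_batch_size is shown at its default:

[mysqld]
innodb_purge_threads=8
innodb_purge_batch_size=300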

For more information about InnoDB I/O performance, see Section 9.5.8, “Optimizing InnoDB Disk I/O”.

15.6.12 Configuring Optimizer Statistics for InnoDB

This section describes how to configure persistent and non-persistent optimizer statistics for InnoDB tables.

Persistent optimizer statistics are persisted across server restarts, allowing for greater plan stability and more consistent query performance. Persistent optimizer statistics also provide control and flexibility with these additional benefits:

  • You can use the innodb_stats_auto_recalc configuration option to control whether statistics are updated automatically after substantial changes to a table.

  • You can use the STATS_PERSISTENT, STATS_AUTO_RECALC, and STATS_SAMPLE_PAGES clauses with CREATE TABLE and ALTER TABLE statements to configure optimizer statistics for individual tables.

  • You can query optimizer statistics data in the mysql.innodb_table_stats and mysql.innodb_index_stats tables.

  • You can view the last_update column of the mysql.innodb_table_stats and mysql.innodb_index_stats tables to see when statistics were last updated.

  • You can manually modify the mysql.innodb_table_stats and mysql.innodb_index_stats tables to force a specific query optimization plan or to test alternative plans without modifying the database.

The persistent optimizer statistics feature is enabled by default (innodb_stats_persistent=ON).

Non-persistent optimizer statistics are cleared on each server restart and after some other operations, and recomputed on the next table access. As a result, different estimates could be produced when recomputing statistics, leading to different choices in execution plans and variations in query performance.

This section also provides information about estimating ANALYZE TABLE complexity, which may be useful when attempting to achieve a balance between accurate statistics and ANALYZE TABLE execution time.

15.6.12.1 Configuring Persistent Optimizer Statistics Parameters

The persistent optimizer statistics feature improves plan stability by storing statistics to disk and making them persistent across server restarts so that the optimizer is more likely to make consistent choices each time for a given query.

Optimizer statistics are persisted to disk when innodb_stats_persistent=ON or when individual tables are created or altered with STATS_PERSISTENT=1. innodb_stats_persistent is enabled by default.

Formerly, optimizer statistics were cleared on each server restart and after some other operations, and recomputed on the next table access. Consequently, different estimates could be produced when recalculating statistics, leading to different choices in query execution plans and thus variations in query performance.

Persistent statistics are stored in the mysql.innodb_table_stats and mysql.innodb_index_stats tables, as described in Section 15.6.12.1.5, “InnoDB Persistent Statistics Tables”.

To revert to using non-persistent optimizer statistics, you can modify tables using an ALTER TABLE tbl_name STATS_PERSISTENT=0 statement. For related information, see Section 15.6.12.2, “Configuring Non-Persistent Optimizer Statistics Parameters”.

15.6.12.1.1 Configuring Automatic Statistics Calculation for Persistent Optimizer Statistics

The innodb_stats_auto_recalc configuration option, which is enabled by default, determines whether statistics are calculated automatically whenever a table undergoes substantial changes (to more than 10% of its rows). You can also configure automatic statistics recalculation for individual tables using a STATS_AUTO_RECALC clause in a CREATE TABLE or ALTER TABLE statement.

Because of the asynchronous nature of automatic statistics recalculation (which occurs in the background), statistics may not be recalculated instantly after running a DML operation that affects more than 10% of a table, even when innodb_stats_auto_recalc is enabled. In some cases, statistics recalculation may be delayed by a few seconds. If up-to-date statistics are required immediately after changing significant portions of a table, run ANALYZE TABLE to initiate a synchronous (foreground) recalculation of statistics.

If innodb_stats_auto_recalc is disabled, ensure the accuracy of optimizer statistics by issuing the ANALYZE TABLE statement for each applicable table after making substantial changes to indexed columns. You might run this statement in your setup scripts after representative data has been loaded into the table, and run it periodically after DML operations significantly change the contents of indexed columns, or on a schedule at times of low activity. When a new index is added to an existing table, index statistics are calculated and added to the innodb_index_stats table regardless of the value of innodb_stats_auto_recalc.

Caution

To ensure statistics are gathered when a new index is created, either enable the innodb_stats_auto_recalc option, or run ANALYZE TABLE after creating each new index when the persistent statistics mode is enabled.

15.6.12.1.2 Configuring Optimizer Statistics Parameters for Individual Tables

innodb_stats_persistent, innodb_stats_auto_recalc, and innodb_stats_persistent_sample_pages are global configuration options. To override these system-wide settings and configure optimizer statistics parameters for individual tables, you can define STATS_PERSISTENT, STATS_AUTO_RECALC, and STATS_SAMPLE_PAGES clauses in CREATE TABLE or ALTER TABLE statements.

  • STATS_PERSISTENT specifies whether to enable persistent statistics for an InnoDB table. The value DEFAULT causes the persistent statistics setting for the table to be determined by the innodb_stats_persistent configuration option. The value 1 enables persistent statistics for the table, while the value 0 turns off this feature. After enabling persistent statistics through a CREATE TABLE or ALTER TABLE statement, issue an ANALYZE TABLE statement to calculate the statistics, after loading representative data into the table.

  • STATS_AUTO_RECALC specifies whether to automatically recalculate persistent statistics for an InnoDB table. The value DEFAULT causes the persistent statistics setting for the table to be determined by the innodb_stats_auto_recalc configuration option. The value 1 causes statistics to be recalculated when 10% of the data in the table has changed. The value 0 prevents automatic recalculation for this table; with this setting, issue an ANALYZE TABLE statement to recalculate the statistics after making substantial changes to the table.

  • STATS_SAMPLE_PAGES specifies the number of index pages to sample when estimating cardinality and other statistics for an indexed column, such as those calculated by ANALYZE TABLE.

All three clauses are specified in the following CREATE TABLE example:

CREATE TABLE `t1` (
  `id` int(8) NOT NULL auto_increment,
  `data` varchar(255),
  `date` datetime,
  PRIMARY KEY  (`id`),
  INDEX `DATE_IX` (`date`)
) ENGINE=InnoDB,
  STATS_PERSISTENT=1,
  STATS_AUTO_RECALC=1,
  STATS_SAMPLE_PAGES=25;

15.6.12.1.3 Configuring the Number of Sampled Pages for InnoDB Optimizer Statistics

The MySQL query optimizer uses estimated statistics about key distributions to choose the indexes for an execution plan, based on the relative selectivity of the index. Operations such as ANALYZE TABLE cause InnoDB to sample random pages from each index on a table to estimate the cardinality of the index. (This technique is known as random dives.)

To give you control over the quality of the statistics estimate (and thus better information for the query optimizer), you can change the number of sampled pages using the parameter innodb_stats_persistent_sample_pages, which can be set at runtime.
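
For example, the following statement doubles the default sample size; 40 is an illustrative value, and the new value is used the next time statistics are recalculated (for example, by ANALYZE TABLE):

mysql> SET GLOBAL innodb_stats_persistent_sample_pages = 40;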

innodb_stats_persistent_sample_pages has a default value of 20. As a general guideline, consider modifying this parameter when encountering the following issues:

  1. Statistics are not accurate enough and the optimizer chooses suboptimal plans, as shown by EXPLAIN output. The accuracy of statistics can be checked by comparing the actual cardinality of an index (as returned by running SELECT DISTINCT on the index columns) with the estimates provided in the mysql.innodb_index_stats persistent statistics table.

    If it is determined that statistics are not accurate enough, the value of innodb_stats_persistent_sample_pages should be increased until the statistics estimates are sufficiently accurate. Increasing innodb_stats_persistent_sample_pages too much, however, could cause ANALYZE TABLE to run slowly.

  2. ANALYZE TABLE is too slow. In this case innodb_stats_persistent_sample_pages should be decreased until ANALYZE TABLE execution time is acceptable. Decreasing the value too much, however, could lead to the first problem of inaccurate statistics and suboptimal query execution plans.

    If a balance cannot be achieved between accurate statistics and ANALYZE TABLE execution time, consider decreasing the number of indexed columns in the table or limiting the number of partitions to reduce ANALYZE TABLE complexity. The number of columns in the table's primary key is also important to consider, as primary key columns are appended to each non-unique index.

    For related information, see Section 15.6.12.3, “Estimating ANALYZE TABLE Complexity for InnoDB Tables”.

15.6.12.1.4 Including Delete-marked Records in Persistent Statistics Calculations

By default, InnoDB reads uncommitted data when calculating statistics. In the case of an uncommitted transaction that deletes rows from a table, InnoDB excludes records that are delete-marked when calculating row estimates and index statistics, which can lead to non-optimal execution plans for other transactions that are operating on the table concurrently using a transaction isolation level other than READ UNCOMMITTED. To avoid this scenario, innodb_stats_include_delete_marked can be enabled to ensure that InnoDB includes delete-marked records when calculating persistent optimizer statistics.

When innodb_stats_include_delete_marked is enabled, ANALYZE TABLE considers delete-marked records when recalculating statistics.

innodb_stats_include_delete_marked is a global setting that affects all InnoDB tables, and it is only applicable to persistent optimizer statistics.
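
For example, a minimal option-file sketch that enables the setting (it is disabled by default):

[mysqld]
innodb_stats_include_delete_marked=ON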

innodb_stats_include_delete_marked was introduced in MySQL 5.7.16.

15.6.12.1.5 InnoDB Persistent Statistics Tables

The persistent statistics feature relies on the internally managed tables in the mysql database, named innodb_table_stats and innodb_index_stats. These tables are set up automatically in all install, upgrade, and build-from-source procedures.

Table 15.3 Columns of innodb_table_stats

Column name                   Description
database_name                 Database name
table_name                    Table name, partition name, or subpartition name
last_update                   A timestamp indicating the last time that InnoDB updated this row
n_rows                        The number of rows in the table
clustered_index_size          The size of the primary index, in pages
sum_of_other_index_sizes      The total size of other (non-primary) indexes, in pages

Table 15.4 Columns of innodb_index_stats

Column name         Description
database_name       Database name
table_name          Table name, partition name, or subpartition name
index_name          Index name
last_update         A timestamp indicating the last time that InnoDB updated this row
stat_name           The name of the statistic, whose value is reported in the stat_value column
stat_value          The value of the statistic that is named in the stat_name column
sample_size         The number of pages sampled for the estimate provided in the stat_value column
stat_description    Description of the statistic that is named in the stat_name column

Both the innodb_table_stats and innodb_index_stats tables include a last_update column showing when InnoDB last updated index statistics, as shown in the following example:

mysql> select * from innodb_table_stats \G
*************************** 1. row ***************************
           database_name: sakila
              table_name: actor
             last_update: 2014-05-28 16:16:44
                  n_rows: 200
    clustered_index_size: 1
sum_of_other_index_sizes: 1
...
mysql> select * from innodb_index_stats \G
*************************** 1. row ***************************
   database_name: sakila
      table_name: actor
      index_name: PRIMARY
     last_update: 2014-05-28 16:16:44
       stat_name: n_diff_pfx01
      stat_value: 200
     sample_size: 1
     ...

The innodb_table_stats and innodb_index_stats tables are ordinary tables and can be updated manually. The ability to update statistics manually makes it possible to force a specific query optimization plan or test alternative plans without modifying the database. If you manually update statistics, issue the FLUSH TABLE tbl_name command to make MySQL reload the updated statistics.
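
For example, the following hypothetical statements overstate the row count of a table named test.t1 to influence the optimizer, and then reload the modified statistics; the table name and value are illustrative only:

mysql> UPDATE mysql.innodb_table_stats
    ->   SET n_rows = 1000000
    ->   WHERE database_name = 'test' AND table_name = 't1';
mysql> FLUSH TABLE test.t1;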

15.6.12.1.6 InnoDB Persistent Statistics Tables Example

The innodb_table_stats table contains one row per table. The data collected is demonstrated in the following example.

Table t1 contains a primary index (columns a, b), a secondary index (columns c, d), and a unique index (columns e, f):

CREATE TABLE t1 (
a INT, b INT, c INT, d INT, e INT, f INT,
PRIMARY KEY (a, b), KEY i1 (c, d), UNIQUE KEY i2uniq (e, f)
) ENGINE=INNODB;

After inserting five rows of sample data, the table appears as follows:

mysql> SELECT * FROM t1;
+---+---+------+------+------+------+
| a | b | c    | d    | e    | f    |
+---+---+------+------+------+------+
| 1 | 1 |   10 |   11 |  100 |  101 |
| 1 | 2 |   10 |   11 |  200 |  102 |
| 1 | 3 |   10 |   11 |  100 |  103 |
| 1 | 4 |   10 |   12 |  200 |  104 |
| 1 | 5 |   10 |   12 |  100 |  105 |
+---+---+------+------+------+------+

To immediately update statistics, run ANALYZE TABLE (if innodb_stats_auto_recalc is enabled, statistics are updated automatically within a few seconds assuming that the 10% threshold for changed table rows is reached):

mysql> ANALYZE TABLE t1;
+---------+---------+----------+----------+
| Table   | Op      | Msg_type | Msg_text |
+---------+---------+----------+----------+
| test.t1 | analyze | status   | OK       |
+---------+---------+----------+----------+

Table statistics for table t1 show the last time InnoDB updated the table statistics (2014-03-14 14:36:34), the number of rows in the table (5), the clustered index size (1 page), and the combined size of the other indexes (2 pages).

mysql> SELECT * FROM mysql.innodb_table_stats WHERE table_name like 't1'\G
*************************** 1. row ***************************
           database_name: test
              table_name: t1
             last_update: 2014-03-14 14:36:34
                  n_rows: 5
    clustered_index_size: 1
sum_of_other_index_sizes: 2

The innodb_index_stats table contains multiple rows for each index. Each row in the innodb_index_stats table provides data related to a particular index statistic which is named in the stat_name column and described in the stat_description column. For example:

mysql> SELECT index_name, stat_name, stat_value, stat_description
    -> FROM mysql.innodb_index_stats WHERE table_name like 't1';
+------------+--------------+------------+-----------------------------------+
| index_name | stat_name    | stat_value | stat_description                  |
+------------+--------------+------------+-----------------------------------+
| PRIMARY    | n_diff_pfx01 |          1 | a                                 |
| PRIMARY    | n_diff_pfx02 |          5 | a,b                               |
| PRIMARY    | n_leaf_pages |          1 | Number of leaf pages in the index |
| PRIMARY    | size         |          1 | Number of pages in the index      |
| i1         | n_diff_pfx01 |          1 | c                                 |
| i1         | n_diff_pfx02 |          2 | c,d                               |
| i1         | n_diff_pfx03 |          2 | c,d,a                             |
| i1         | n_diff_pfx04 |          5 | c,d,a,b                           |
| i1         | n_leaf_pages |          1 | Number of leaf pages in the index |
| i1         | size         |          1 | Number of pages in the index      |
| i2uniq     | n_diff_pfx01 |          2 | e                                 |
| i2uniq     | n_diff_pfx02 |          5 | e,f                               |
| i2uniq     | n_leaf_pages |          1 | Number of leaf pages in the index |
| i2uniq     | size         |          1 | Number of pages in the index      |
+------------+--------------+------------+-----------------------------------+

The stat_name column shows the following types of statistics:

  • size: Where stat_name=size, the stat_value column displays the total number of pages in the index.

  • n_leaf_pages: Where stat_name=n_leaf_pages, the stat_value column displays the number of leaf pages in the index.

  • n_diff_pfxNN: Where stat_name=n_diff_pfx01, the stat_value column displays the number of distinct values in the first column of the index. Where stat_name=n_diff_pfx02, the stat_value column displays the number of distinct values in the first two columns of the index, and so on. Additionally, where stat_name=n_diff_pfxNN, the stat_description column shows a comma separated list of the index columns that are counted.

To further illustrate the n_diff_pfxNN statistic, which provides cardinality data, consider the t1 table example. As shown below, the t1 table is created with a primary index (columns a, b), a secondary index (columns c, d), and a unique index (columns e, f):

CREATE TABLE t1 (
  a INT, b INT, c INT, d INT, e INT, f INT,
  PRIMARY KEY (a, b), KEY i1 (c, d), UNIQUE KEY i2uniq (e, f)
) ENGINE=INNODB;

After inserting five rows of sample data, the table appears as follows:

mysql> SELECT * FROM t1;
+---+---+------+------+------+------+
| a | b | c    | d    | e    | f    |
+---+---+------+------+------+------+
| 1 | 1 |   10 |   11 |  100 |  101 |
| 1 | 2 |   10 |   11 |  200 |  102 |
| 1 | 3 |   10 |   11 |  100 |  103 |
| 1 | 4 |   10 |   12 |  200 |  104 |
| 1 | 5 |   10 |   12 |  100 |  105 |
+---+---+------+------+------+------+

When you query the index_name, stat_name, stat_value, and stat_description columns where stat_name LIKE 'n_diff%', the following result set is returned:

mysql> SELECT index_name, stat_name, stat_value, stat_description
    -> FROM mysql.innodb_index_stats
    -> WHERE table_name like 't1' AND stat_name LIKE 'n_diff%';
+------------+--------------+------------+------------------+
| index_name | stat_name    | stat_value | stat_description |
+------------+--------------+------------+------------------+
| PRIMARY    | n_diff_pfx01 |          1 | a                |
| PRIMARY    | n_diff_pfx02 |          5 | a,b              |
| i1         | n_diff_pfx01 |          1 | c                |
| i1         | n_diff_pfx02 |          2 | c,d              |
| i1         | n_diff_pfx03 |          2 | c,d,a            |
| i1         | n_diff_pfx04 |          5 | c,d,a,b          |
| i2uniq     | n_diff_pfx01 |          2 | e                |
| i2uniq     | n_diff_pfx02 |          5 | e,f              |
+------------+--------------+------------+------------------+

For the PRIMARY index, there are two n_diff% rows. The number of rows is equal to the number of columns in the index.

Note

For non-unique indexes, InnoDB appends the columns of the primary key.

  • Where index_name=PRIMARY and stat_name=n_diff_pfx01, the stat_value is 1, which indicates that there is a single distinct value in the first column of the index (column a). The number of distinct values in column a is confirmed by viewing the data in column a in table t1, in which there is a single distinct value (1). The counted column (a) is shown in the stat_description column of the result set.

  • Where index_name=PRIMARY and stat_name=n_diff_pfx02, the stat_value is 5, which indicates that there are five distinct values in the two columns of the index (a,b). The number of distinct values in columns a and b is confirmed by viewing the data in columns a and b in table t1, in which there are five distinct values: (1,1), (1,2), (1,3), (1,4) and (1,5). The counted columns (a,b) are shown in the stat_description column of the result set.

For the secondary index (i1), there are four n_diff% rows. Only two columns are defined for the secondary index (c,d) but there are four n_diff% rows for the secondary index because InnoDB suffixes all non-unique indexes with the primary key. As a result, there are four n_diff% rows instead of two to account for both the secondary index columns (c,d) and the primary key columns (a,b).

  • Where index_name=i1 and stat_name=n_diff_pfx01, the stat_value is 1, which indicates that there is a single distinct value in the first column of the index (column c). The number of distinct values in column c is confirmed by viewing the data in column c in table t1, in which there is a single distinct value: (10). The counted column (c) is shown in the stat_description column of the result set.

  • Where index_name=i1 and stat_name=n_diff_pfx02, the stat_value is 2, which indicates that there are two distinct values in the first two columns of the index (c,d). The number of distinct values in columns c and d is confirmed by viewing the data in columns c and d in table t1, in which there are two distinct values: (10,11) and (10,12). The counted columns (c,d) are shown in the stat_description column of the result set.

  • Where index_name=i1 and stat_name=n_diff_pfx03, the stat_value is 2, which indicates that there are two distinct values in the first three columns of the index (c,d,a). The number of distinct values in columns c, d, and a is confirmed by viewing the data in column c, d, and a in table t1, in which there are two distinct values: (10,11,1) and (10,12,1). The counted columns (c,d,a) are shown in the stat_description column of the result set.

  • Where index_name=i1 and stat_name=n_diff_pfx04, the stat_value is 5, which indicates that there are five distinct values in the four columns of the index (c,d,a,b). The number of distinct values in columns c, d, a and b is confirmed by viewing the data in columns c, d, a, and b in table t1, in which there are five distinct values: (10,11,1,1), (10,11,1,2), (10,11,1,3), (10,12,1,4) and (10,12,1,5). The counted columns (c,d,a,b) are shown in the stat_description column of the result set.

For the unique index (i2uniq), there are two n_diff% rows.

  • Where index_name=i2uniq and stat_name=n_diff_pfx01, the stat_value is 2, which indicates that there are two distinct values in the first column of the index (column e). The number of distinct values in column e is confirmed by viewing the data in column e in table t1, in which there are two distinct values: (100) and (200). The counted column (e) is shown in the stat_description column of the result set.

  • Where index_name=i2uniq and stat_name=n_diff_pfx02, the stat_value is 5, which indicates that there are five distinct values in the two columns of the index (e,f). The number of distinct values in columns e and f is confirmed by viewing the data in columns e and f in table t1, in which there are five distinct values: (100,101), (200,102), (100,103), (200,104) and (100,105). The counted columns (e,f) are shown in the stat_description column of the result set.

15.6.12.1.7 Retrieving Index Size Using the innodb_index_stats Table

The size of indexes for tables, partitions, or subpartitions can be retrieved using the innodb_index_stats table. In the following example, index sizes are retrieved for table t1. For a definition of table t1 and corresponding index statistics, see Section 15.6.12.1.6, “InnoDB Persistent Statistics Tables Example”.

mysql> SELECT SUM(stat_value) pages, index_name,
    -> SUM(stat_value)*@@innodb_page_size size
    -> FROM mysql.innodb_index_stats WHERE table_name='t1'
    -> AND stat_name = 'size' GROUP BY index_name;
+-------+------------+-------+
| pages | index_name | size  |
+-------+------------+-------+
|     1 | PRIMARY    | 16384 |
|     1 | i1         | 16384 |
|     1 | i2uniq     | 16384 |
+-------+------------+-------+

For partitions or subpartitions, the same query with a modified WHERE clause can be used to retrieve index sizes. For example, the following query retrieves index sizes for partitions of table t1:

mysql> SELECT SUM(stat_value) pages, index_name,
    -> SUM(stat_value)*@@innodb_page_size size
    -> FROM mysql.innodb_index_stats WHERE table_name like 't1#P%'
    -> AND stat_name = 'size' GROUP BY index_name;

15.6.12.2 Configuring Non-Persistent Optimizer Statistics Parameters

This section describes how to configure non-persistent optimizer statistics. Optimizer statistics are not persisted to disk when innodb_stats_persistent=OFF or when individual tables are created or altered with STATS_PERSISTENT=0. Instead, statistics are stored in memory, and are lost when the server is shut down. Statistics are also updated periodically by certain operations and under certain conditions.

As of MySQL 5.6.6, optimizer statistics are persisted to disk by default, enabled by the innodb_stats_persistent configuration option. For information about persistent optimizer statistics, see Section 15.6.12.1, “Configuring Persistent Optimizer Statistics Parameters”.

Optimizer Statistics Updates

Non-persistent optimizer statistics are updated when:

  • ANALYZE TABLE is run.

  • SHOW TABLE STATUS or SHOW INDEX is run, or the INFORMATION_SCHEMA.TABLES or INFORMATION_SCHEMA.STATISTICS tables are queried, while the innodb_stats_on_metadata option is enabled.

  • A table is opened for the first time since the server started.

  • InnoDB detects that 1/16 of the table's rows have been modified since the last statistics update.

Configuring the Number of Sampled Pages

The MySQL query optimizer uses estimated statistics about key distributions to choose the indexes for an execution plan, based on the relative selectivity of the index. When InnoDB updates optimizer statistics, it samples random pages from each index on a table to estimate the cardinality of the index. (This technique is known as random dives.)

To give you control over the quality of the statistics estimate (and thus better information for the query optimizer), you can change the number of sampled pages using the parameter innodb_stats_transient_sample_pages. The default number of sampled pages is 8, which could be insufficient to produce an accurate estimate, leading to poor index choices by the query optimizer. Accurate statistics are especially important for large tables and tables used in joins. Unnecessary full table scans for such tables can be a substantial performance issue. See Section 9.2.1.19, “Avoiding Full Table Scans” for tips on tuning such queries. innodb_stats_transient_sample_pages is a global parameter that can be set at runtime.
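
For example, the following statement doubles the default sample size; 16 is an illustrative value:

mysql> SET GLOBAL innodb_stats_transient_sample_pages = 16;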

The value of innodb_stats_transient_sample_pages affects the index sampling for all InnoDB tables and indexes when innodb_stats_persistent=0. Be aware of the following potentially significant impacts when you change the index sample size:

  • Small values like 1 or 2 can result in inaccurate estimates of cardinality.

  • Increasing the innodb_stats_transient_sample_pages value might require more disk reads. Values much larger than 8 (say, 100), can cause a significant slowdown in the time it takes to open a table or execute SHOW TABLE STATUS.

  • The optimizer might choose very different query plans based on different estimates of index selectivity.

Whatever value of innodb_stats_transient_sample_pages works best for a system, set the option and leave it at that value. Choose a value that results in reasonably accurate estimates for all tables in your database without requiring excessive I/O. Because the statistics are automatically recalculated at various times other than on execution of ANALYZE TABLE, it does not make sense to increase the index sample size, run ANALYZE TABLE, then decrease sample size again.

Smaller tables generally require fewer index samples than larger tables. If your database has many large tables, consider using a higher value for innodb_stats_transient_sample_pages than if you have mostly smaller tables.

15.6.12.3 Estimating ANALYZE TABLE Complexity for InnoDB Tables

ANALYZE TABLE complexity for InnoDB tables is dependent on:

  • The number of pages sampled, as defined by innodb_stats_persistent_sample_pages.

  • The number of indexed columns in a table.

  • The number of partitions. If a table has no partitions, the number of partitions is considered to be 1.

Using these parameters, an approximate formula for estimating ANALYZE TABLE complexity would be:

The value of innodb_stats_persistent_sample_pages * number of indexed columns in a table * the number of partitions

Typically, the greater the resulting value, the greater the execution time for ANALYZE TABLE.

Note

innodb_stats_persistent_sample_pages defines the number of pages sampled at a global level. To set the number of pages sampled for an individual table, use the STATS_SAMPLE_PAGES option with CREATE TABLE or ALTER TABLE. For more information, see Section 15.6.12.1, “Configuring Persistent Optimizer Statistics Parameters”.

If innodb_stats_persistent=OFF, the number of pages sampled is defined by innodb_stats_transient_sample_pages. See Section 15.6.12.2, “Configuring Non-Persistent Optimizer Statistics Parameters” for additional information.

For a more in-depth approach to estimating ANALYZE TABLE complexity, consider the following example.

In Big O notation, ANALYZE TABLE complexity is described as:

 O(n_sample
  * (n_cols_in_uniq_i
     + n_cols_in_non_uniq_i
     + n_cols_in_pk * (1 + n_non_uniq_i))
  * n_part)          

where:

  • n_sample is the number of pages sampled (defined by innodb_stats_persistent_sample_pages)

  • n_cols_in_uniq_i is the total number of all columns in all unique indexes (not counting the primary key columns)

  • n_cols_in_non_uniq_i is the total number of all columns in all non-unique indexes

  • n_cols_in_pk is the number of columns in the primary key (if a primary key is not defined, InnoDB creates a single column primary key internally)

  • n_non_uniq_i is the number of non-unique indexes in the table

  • n_part is the number of partitions. If no partitions are defined, the table is considered to be a single partition.

Now, consider the following table (table t), which has a primary key (2 columns), a unique index (2 columns), and two non-unique indexes (two columns each):

 CREATE TABLE t (
  a INT,
  b INT,
  c INT,
  d INT,
  e INT,
  f INT,
  g INT,
  h INT,
  PRIMARY KEY (a, b),
  UNIQUE KEY i1uniq (c, d),
  KEY i2nonuniq (e, f),
  KEY i3nonuniq (g, h)
);    

For the column and index data required by the algorithm described above, query the mysql.innodb_index_stats persistent index statistics table for table t. The n_diff_pfx% statistics show the columns that are counted for each index. For example, columns a and b are counted for the primary key index. For the non-unique indexes, the primary key columns (a,b) are counted in addition to the user defined columns.

Note

For additional information about the InnoDB persistent statistics tables, see Section 15.6.12.1, “Configuring Persistent Optimizer Statistics Parameters”

  SELECT index_name, stat_name, stat_description
  FROM mysql.innodb_index_stats
  WHERE
  database_name='test' AND
  table_name='t' AND
  stat_name like 'n_diff_pfx%';

  +------------+--------------+------------------+
  | index_name | stat_name    | stat_description |
  +------------+--------------+------------------+
  | PRIMARY    | n_diff_pfx01 | a                |
  | PRIMARY    | n_diff_pfx02 | a,b              |
  | i1uniq     | n_diff_pfx01 | c                |
  | i1uniq     | n_diff_pfx02 | c,d              |
  | i2nonuniq  | n_diff_pfx01 | e                |
  | i2nonuniq  | n_diff_pfx02 | e,f              |
  | i2nonuniq  | n_diff_pfx03 | e,f,a            |
  | i2nonuniq  | n_diff_pfx04 | e,f,a,b          |
  | i3nonuniq  | n_diff_pfx01 | g                |
  | i3nonuniq  | n_diff_pfx02 | g,h              |
  | i3nonuniq  | n_diff_pfx03 | g,h,a            |
  | i3nonuniq  | n_diff_pfx04 | g,h,a,b          |
  +------------+--------------+------------------+   

Based on the index statistics data shown above and the table definition, the following values can be determined:

  • n_cols_in_uniq_i, the total number of all columns in all unique indexes not counting the primary key columns, is 2 (c and d)

  • n_cols_in_non_uniq_i, the total number of all columns in all non-unique indexes, is 4 (e, f, g and h)

  • n_cols_in_pk, the number of columns in the primary key, is 2 (a and b)

  • n_non_uniq_i, the number of non-unique indexes in the table, is 2 (i2nonuniq and i3nonuniq)

  • n_part, the number of partitions, is 1.

You can now calculate innodb_stats_persistent_sample_pages * (2 + 4 + 2 * (1 + 2)) * 1 to determine the number of leaf pages that are scanned. With innodb_stats_persistent_sample_pages set to the default value of 20, and with a default page size of 16 KiB (innodb_page_size=16384), you can then estimate that 20 * 12 * 16384 bytes are read for table t, or about 4 MiB.

Note

All 4 MiB may not be read from disk, as some leaf pages may already be cached in the buffer pool.

15.6.13 Configuring the Merge Threshold for Index Pages

You can configure the MERGE_THRESHOLD value for index pages. If the page-full percentage for an index page falls below the MERGE_THRESHOLD value when a row is deleted or when a row is shortened by an UPDATE operation, InnoDB attempts to merge the index page with a neighboring index page. The default MERGE_THRESHOLD value is 50, which is the previously hardcoded value. The minimum MERGE_THRESHOLD value is 1 and the maximum value is 50.

When the page-full percentage for an index page falls below 50%, which is the default MERGE_THRESHOLD setting, InnoDB attempts to merge the index page with a neighboring page. If both pages are close to 50% full, a page split can occur soon after the pages are merged. If this merge-split behavior occurs frequently, it can have an adverse effect on performance. To avoid frequent merge-splits, you can lower the MERGE_THRESHOLD value so that InnoDB attempts page merges at a lower page-full percentage. Merging pages at a lower page-full percentage leaves more room in index pages and helps reduce merge-split behavior.

The MERGE_THRESHOLD for index pages can be defined for a table or for individual indexes. A MERGE_THRESHOLD value defined for an individual index takes priority over a MERGE_THRESHOLD value defined for the table. If undefined, the MERGE_THRESHOLD value defaults to 50.

Setting MERGE_THRESHOLD for a Table

You can set the MERGE_THRESHOLD value for a table using the table_option COMMENT clause of the CREATE TABLE statement. For example:

CREATE TABLE t1 (
   id INT,
  KEY id_index (id)
) COMMENT='MERGE_THRESHOLD=45';

You can also set the MERGE_THRESHOLD value for an existing table using the table_option COMMENT clause with ALTER TABLE:

CREATE TABLE t1 (
   id INT,
  KEY id_index (id)
);

ALTER TABLE t1 COMMENT='MERGE_THRESHOLD=40';    

Setting MERGE_THRESHOLD for Individual Indexes

To set the MERGE_THRESHOLD value for an individual index, you can use the index_option COMMENT clause with CREATE TABLE, ALTER TABLE, or CREATE INDEX, as shown in the following examples:

  • Setting MERGE_THRESHOLD for an individual index using CREATE TABLE:

    CREATE TABLE t1 (
       id INT,
      KEY id_index (id) COMMENT 'MERGE_THRESHOLD=40'
    );
  • Setting MERGE_THRESHOLD for an individual index using ALTER TABLE:

    CREATE TABLE t1 (
       id INT,
      KEY id_index (id)
    );
    
    ALTER TABLE t1 DROP KEY id_index;
    ALTER TABLE t1 ADD KEY id_index (id) COMMENT 'MERGE_THRESHOLD=40';
  • Setting MERGE_THRESHOLD for an individual index using CREATE INDEX:

    CREATE TABLE t1 (id INT);
    CREATE INDEX id_index ON t1 (id) COMMENT 'MERGE_THRESHOLD=40';
Note

You cannot modify the MERGE_THRESHOLD value at the index level for GEN_CLUST_INDEX, which is the clustered index created by InnoDB when an InnoDB table is created without a primary key or unique key index. You can only modify the MERGE_THRESHOLD value for GEN_CLUST_INDEX by setting MERGE_THRESHOLD for the table.
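
For example, for a hypothetical table created without a primary key or unique index, the merge threshold of its GEN_CLUST_INDEX can only be influenced through the table-level setting:

mysql> CREATE TABLE t3 (a INT, b INT) COMMENT='MERGE_THRESHOLD=45';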

Querying the MERGE_THRESHOLD Value for an Index

The current MERGE_THRESHOLD value for an index can be obtained by querying the INNODB_SYS_INDEXES table. For example:

mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES WHERE NAME='id_index' \G
*************************** 1. row ***************************
       INDEX_ID: 91
           NAME: id_index
       TABLE_ID: 68
           TYPE: 0
       N_FIELDS: 1
        PAGE_NO: 4
          SPACE: 57
MERGE_THRESHOLD: 40

You can use SHOW CREATE TABLE to view the MERGE_THRESHOLD value for a table, if explicitly defined using the table_option COMMENT clause:

mysql> SHOW CREATE TABLE t2 \G
*************************** 1. row ***************************
       Table: t2
Create Table: CREATE TABLE `t2` (
  `id` int(11) DEFAULT NULL,
  KEY `id_index` (`id`) COMMENT 'MERGE_THRESHOLD=40'
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Note

A MERGE_THRESHOLD value defined at the index level takes priority over a MERGE_THRESHOLD value defined for the table. If undefined, MERGE_THRESHOLD defaults to 50 (MERGE_THRESHOLD=50, the previously hardcoded value).

Likewise, you can use SHOW INDEX to view the MERGE_THRESHOLD value for an index, if explicitly defined using the index_option COMMENT clause:

mysql> SHOW INDEX FROM t2 \G
*************************** 1. row ***************************
        Table: t2
   Non_unique: 1
     Key_name: id_index
 Seq_in_index: 1
  Column_name: id
    Collation: A
  Cardinality: 0
     Sub_part: NULL
       Packed: NULL
         Null: YES
   Index_type: BTREE
      Comment:
Index_comment: MERGE_THRESHOLD=40

Measuring the Effect of MERGE_THRESHOLD Settings

The INNODB_METRICS table provides two counters that can be used to measure the effect of a MERGE_THRESHOLD setting on index page merges.

mysql> SELECT NAME, COMMENT FROM INFORMATION_SCHEMA.INNODB_METRICS
WHERE NAME like '%index_page_merge%';
+-----------------------------+----------------------------------------+
| NAME                        | COMMENT                                |
+-----------------------------+----------------------------------------+
| index_page_merge_attempts   | Number of index page merge attempts    |
| index_page_merge_successful | Number of successful index page merges |
+-----------------------------+----------------------------------------+
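
These counters may not be enabled by default. A minimal sketch of enabling them with innodb_monitor_enable (which accepts the % wildcard) and then reading the accumulated values:

mysql> SET GLOBAL innodb_monitor_enable='index_page_merge%';
mysql> SELECT NAME, COUNT FROM INFORMATION_SCHEMA.INNODB_METRICS
       WHERE NAME LIKE 'index_page_merge%';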

When lowering the MERGE_THRESHOLD value, the objectives are:

  • A smaller number of page merge attempts and successful page merges

  • A similar number of page merge attempts and successful page merges

A MERGE_THRESHOLD setting that is too small could result in large data files due to an excessive amount of empty page space.

For information about using INNODB_METRICS counters, see Section 15.15.6, “InnoDB INFORMATION_SCHEMA Metrics Table”.

15.7 InnoDB Tablespaces

15.7.1 Resizing the InnoDB System Tablespace

This section describes how to increase or decrease the size of the InnoDB system tablespace.

Increasing the Size of the InnoDB System Tablespace

The easiest way to increase the size of the InnoDB system tablespace is to configure it from the beginning to be auto-extending. Specify the autoextend attribute for the last data file in the tablespace definition. Then InnoDB increases the size of that file automatically in 64MB increments when it runs out of space. The increment size can be changed by setting the value of the innodb_autoextend_increment system variable, which is measured in megabytes.
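
For example, innodb_autoextend_increment is a dynamic variable, so the increment can be adjusted at runtime; the 32MB value shown here is only illustrative:

mysql> SET GLOBAL innodb_autoextend_increment=32;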

You can expand the system tablespace by a defined amount by adding another data file:

  1. Shut down the MySQL server.

  2. If the previous last data file is defined with the keyword autoextend, change its definition to use a fixed size, based on how large it has actually grown. Check the size of the data file, round it down to the closest multiple of 1024 × 1024 bytes (= 1MB), and specify this rounded size explicitly in innodb_data_file_path.

  3. Add a new data file to the end of innodb_data_file_path, optionally making that file auto-extending. Only the last data file in the innodb_data_file_path can be specified as auto-extending.

  4. Start the MySQL server again.

For example, this tablespace has just one auto-extending data file ibdata1:

innodb_data_home_dir =
innodb_data_file_path = /ibdata/ibdata1:10M:autoextend

Suppose that this data file, over time, has grown to 988MB. Here is the configuration line after modifying the original data file to use a fixed size and adding a new auto-extending data file:

innodb_data_home_dir =
innodb_data_file_path = /ibdata/ibdata1:988M;/disk2/ibdata2:50M:autoextend

When you add a new data file to the system tablespace configuration, make sure that the filename does not refer to an existing file. InnoDB creates and initializes the file when you restart the server.

Decreasing the Size of the InnoDB System Tablespace

You cannot remove a data file from the system tablespace. To decrease the system tablespace size, use this procedure:

  1. Use mysqldump to dump all your InnoDB tables, including InnoDB tables located in the mysql database. As of MySQL 5.6, there are five InnoDB tables in the mysql database:

    mysql> select table_name from information_schema.tables where table_schema='mysql' and engine='InnoDB';
    +----------------------+
    | table_name           |
    +----------------------+
    | innodb_index_stats   |
    | innodb_table_stats   |
    | slave_master_info    |
    | slave_relay_log_info |
    | slave_worker_info    |
    +----------------------+
    5 rows in set (0.00 sec)
          
  2. Stop the server.

  3. Remove all the existing tablespace files (*.ibd), including the ibdata and ib_log files. Do not forget to remove *.ibd files for tables located in the mysql database.

  4. Remove any .frm files for InnoDB tables.

  5. Configure a new tablespace.

  6. Restart the server.

  7. Import the dump files.

Note

If your databases only use the InnoDB engine, it may be simpler to dump all databases, stop the server, remove all databases and InnoDB log files, restart the server, and import the dump files.
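
A minimal sketch of that simpler approach; the dump file name and mysqldump options are illustrative:

shell> mysqldump --all-databases --routines --events > all_databases.sql
# stop the server, remove the data and log files, reconfigure the tablespace, restart
shell> mysql < all_databases.sql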

15.7.2 Changing the Number or Size of InnoDB Redo Log Files

To change the number or the size of your InnoDB redo log files, perform the following steps:

  1. Stop the MySQL server and make sure that it shuts down without errors.

  2. Edit my.cnf to change the log file configuration. To change the log file size, configure innodb_log_file_size. To increase the number of log files, configure innodb_log_files_in_group.

  3. Start the MySQL server again.

If InnoDB detects that the innodb_log_file_size differs from the redo log file size, it writes a log checkpoint, closes and removes the old log files, creates new log files at the requested size, and opens the new log files.
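
For example, step 2 might change my.cnf as follows to use four 512MB redo log files; the size and count shown are illustrative:

[mysqld]
innodb_log_file_size=512M
innodb_log_files_in_group=4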

15.7.3 Using Raw Disk Partitions for the System Tablespace

You can use raw disk partitions as data files in the InnoDB system tablespace. This technique enables nonbuffered I/O on Windows and on some Linux and Unix systems without file system overhead. Perform tests with and without raw partitions to verify whether this change actually improves performance on your system.

When you use a raw disk partition, ensure that the user ID that runs the MySQL server has read and write privileges for that partition. For example, if you run the server as the mysql user, the partition must be readable and writeable by mysql. If you run the server with the --memlock option, the server must be run as root, so the partition must be readable and writeable by root.

The procedures described below involve option file modification. For additional information, see Section 5.2.6, “Using Option Files”.

Allocating a Raw Disk Partition on Linux and Unix Systems

  1. When you create a new data file, specify the keyword newraw immediately after the data file size for the innodb_data_file_path option. The partition must be at least as large as the size that you specify. Note that 1MB in InnoDB is 1024 × 1024 bytes, whereas 1MB in disk specifications usually means 1,000,000 bytes.

    [mysqld]
    innodb_data_home_dir=
    innodb_data_file_path=/dev/hdd1:3Gnewraw;/dev/hdd2:2Gnewraw
    
  2. Restart the server. InnoDB notices the newraw keyword and initializes the new partition. However, do not create or change any InnoDB tables yet. Otherwise, when you next restart the server, InnoDB reinitializes the partition and your changes are lost. (As a safety measure InnoDB prevents users from modifying data when any partition with newraw is specified.)

  3. After InnoDB has initialized the new partition, stop the server, change newraw in the data file specification to raw:

    [mysqld]
    innodb_data_home_dir=
    innodb_data_file_path=/dev/hdd1:3Graw;/dev/hdd2:2Graw
    
  4. Restart the server. InnoDB now permits changes to be made.

Allocating a Raw Disk Partition on Windows

On Windows systems, the same steps and accompanying guidelines described for Linux and Unix systems apply except that the innodb_data_file_path setting differs slightly on Windows.

  1. When you create a new data file, specify the keyword newraw immediately after the data file size for the innodb_data_file_path option:

    [mysqld]
    innodb_data_home_dir=
    innodb_data_file_path=//./D::10Gnewraw
    

    The //./ corresponds to the Windows syntax of \\.\ for accessing physical drives. In the example above, D: is the drive letter of the partition.

  2. Restart the server. InnoDB notices the newraw keyword and initializes the new partition.

  3. After InnoDB has initialized the new partition, stop the server, change newraw in the data file specification to raw:

    [mysqld]
    innodb_data_home_dir=
    innodb_data_file_path=//./D::10Graw
    
  4. Restart the server. InnoDB now permits changes to be made.

15.7.4 InnoDB File-Per-Table Tablespaces

Historically, all InnoDB tables and indexes were stored in the system tablespace. This monolithic approach was targeted at machines dedicated entirely to database processing, with carefully planned data growth, where any disk storage allocated to MySQL would never be needed for other purposes. InnoDB's file-per-table tablespace feature provides a more flexible alternative, where each InnoDB table and its indexes are stored in a separate .ibd data file. Each such .ibd data file represents an individual tablespace. This feature is controlled by the innodb_file_per_table configuration option, which is enabled by default in MySQL 5.6.6 and higher.

Advantages of File-Per-Table Tablespaces

  • You can reclaim disk space when truncating or dropping a table stored in a file-per-table tablespace. Truncating or dropping tables stored in the shared system tablespace creates free space internally in the system tablespace data files (ibdata files) which can only be used for new InnoDB data.

    Similarly, a table-copying ALTER TABLE operation on a table that resides in a shared tablespace can increase the amount of space used by the tablespace. Such operations may require as much additional space as the data in the table plus indexes. The additional space required for the table-copying ALTER TABLE operation is not released back to the operating system as it is for file-per-table tablespaces.

  • The TRUNCATE TABLE operation is faster when run on tables stored in file-per-table tablespaces.

  • You can store specific tables on separate storage devices, for I/O optimization, space management, or backup purposes. In previous releases, you had to move entire database directories to other drives and create symbolic links in the MySQL data directory, as described in Section 9.12.3, “Using Symbolic Links”. In MySQL 5.6.6 and higher, you can specify the location of each table using the syntax CREATE TABLE ... DATA DIRECTORY = absolute_path_to_directory, as explained in Section 15.7.5, “Creating a File-Per-Table Tablespace Outside the Data Directory”.

  • You can run OPTIMIZE TABLE to compact or recreate a file-per-table tablespace. When you run an OPTIMIZE TABLE, InnoDB creates a new .ibd file with a temporary name, using only the space required to store actual data. When the optimization is complete, InnoDB removes the old .ibd file and replaces it with the new one. If the previous .ibd file grew significantly but the actual data only accounted for a portion of its size, running OPTIMIZE TABLE can reclaim the unused space.

  • You can move individual InnoDB tables rather than entire databases.

  • You can copy individual InnoDB tables from one MySQL instance to another (known as the transportable tablespace feature).

  • Tables created in file-per-table tablespaces use the Barracuda file format. The Barracuda file format enables features such as compressed and dynamic row formats.

  • You can enable more efficient storage for tables with large BLOB or TEXT columns using the dynamic row format.

  • File-per-table tablespaces may improve chances for a successful recovery and save time when a corruption occurs, when a server cannot be restarted, or when backup and binary logs are unavailable.

  • File-per-table tablespaces are convenient for per-table status reporting when copying or backing up tables.

  • You can monitor table size at a file system level, without accessing MySQL (see the example following this list).

  • Common Linux file systems do not permit concurrent writes to a single file when innodb_flush_method is set to O_DIRECT. As a result, using file-per-table tablespaces in conjunction with innodb_flush_method=O_DIRECT may yield performance improvements.

  • The system tablespace stores the data dictionary and undo logs, and is limited in size by InnoDB tablespace size limits. See Section 15.8.8, “Limits on InnoDB Tables”. With file-per-table tablespaces, each table has its own tablespace, which provides room for growth.
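
For example, with file-per-table tablespaces, checking how much space a table occupies requires only file system tools; the data directory path and table name below are illustrative:

shell> ls -lh /path/to/datadir/test/t1.ibd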

Potential Disadvantages of File-Per-Table Tablespaces

  • With file-per-table tablespaces, each table may have unused space, which can only be utilized by rows of the same table. This could lead to wasted space if not properly managed.

  • fsync operations must run on each open table rather than on a single file. Because there is a separate fsync operation for each file, write operations on multiple tables cannot be combined into a single I/O operation. This may require InnoDB to perform a higher total number of fsync operations.

  • mysqld must keep one open file handle per table, which may impact performance if you have numerous tables in file-per-table tablespaces.

  • More file descriptors are used.

  • innodb_file_per_table is enabled by default in MySQL 5.6.6 and higher. You may consider disabling it if backward compatibility with MySQL 5.5 or 5.1 is a concern. Disabling innodb_file_per_table prevents ALTER TABLE from moving an InnoDB table from the system tablespace to an individual .ibd file in cases where ALTER TABLE recreates the table (ALGORITHM=COPY).

    For example, when restructuring the clustered index for an InnoDB table, the table is re-created using the current setting for innodb_file_per_table. This behavior does not apply when adding or dropping InnoDB secondary indexes. When a secondary index is created without rebuilding the table, the index is stored in the same file as the table data, regardless of the current innodb_file_per_table setting. This behavior also does not apply to tables added to the system tablespace using CREATE TABLE ... TABLESPACE or ALTER TABLE ... TABLESPACE syntax. These tables are not affected by the innodb_file_per_table setting.

  • If many tables are growing, there is potential for more fragmentation, which can impede DROP TABLE and table scan performance. However, when fragmentation is managed, having files in their own tablespace can improve performance.

  • The buffer pool is scanned when dropping a file-per-table tablespace, which can take several seconds for buffer pools that are tens of gigabytes in size. The scan is performed with a broad internal lock, which may delay other operations. Tables in the system tablespace are not affected.

  • The innodb_autoextend_increment variable, which defines increment size (in MB) for extending the size of an auto-extending shared tablespace file when it becomes full, does not apply to file-per-table tablespace files, which are auto-extending regardless of the innodb_autoextend_increment setting. The initial extensions are by small amounts, after which extensions occur in increments of 4MB.

15.7.4.1 Enabling and Disabling File-Per-Table Tablespaces

The innodb_file_per_table option is enabled by default as of MySQL 5.6.6.

To set the innodb_file_per_table option at startup, start the server with the --innodb_file_per_table command-line option, or add this line to the [mysqld] section of my.cnf:

[mysqld]
innodb_file_per_table=1

You can also set innodb_file_per_table dynamically, while the server is running:

SET GLOBAL innodb_file_per_table=1;

With innodb_file_per_table enabled, you can store InnoDB tables in a tbl_name.ibd file. Unlike the MyISAM storage engine, with its separate tbl_name.MYD and tbl_name.MYI files for data and indexes, InnoDB stores the data and the indexes together in a single .ibd file. The tbl_name.frm file is still created as usual.

If you disable innodb_file_per_table in your startup options and restart the server, or disable it with the SET GLOBAL statement, InnoDB creates new tables inside the system tablespace unless you have explicitly placed the table in a file-per-table tablespace or general tablespace using the CREATE TABLE ... TABLESPACE option.

You can always read and write any InnoDB tables, regardless of the file-per-table setting.

To move a table from the system tablespace to its own tablespace, change the innodb_file_per_table setting and rebuild the table:

SET GLOBAL innodb_file_per_table=1;
ALTER TABLE table_name ENGINE=InnoDB;

Tables added to the system tablespace using CREATE TABLE ... TABLESPACE or ALTER TABLE ... TABLESPACE syntax are not affected by the innodb_file_per_table setting. To move these tables from the system tablespace to a file-per-table tablespace, they must be moved explicitly using ALTER TABLE ... TABLESPACE syntax.

Note

InnoDB always needs the system tablespace because it puts its internal data dictionary and undo logs there. The .ibd files are not sufficient for InnoDB to operate.

When a table is moved out of the system tablespace into its own .ibd file, the data files that make up the system tablespace remain the same size. The space formerly occupied by the table can be reused for new InnoDB data, but is not reclaimed for use by the operating system. When moving large InnoDB tables out of the system tablespace, where disk space is limited, you may prefer to enable innodb_file_per_table and recreate the entire instance using the mysqldump command. As mentioned above, tables added to the system tablespace using CREATE TABLE ... TABLESPACE or ALTER TABLE ... TABLESPACE syntax are not affected by the innodb_file_per_table setting. These tables must be moved individually.

15.7.5 Creating a File-Per-Table Tablespace Outside the Data Directory

To create a new InnoDB file-per-table tablespace in a specific location outside the MySQL data directory, use the DATA DIRECTORY = absolute_path_to_directory clause of the CREATE TABLE statement.

Plan the location in advance, because you cannot use the DATA DIRECTORY clause with the ALTER TABLE statement. The directory you specify could be on another storage device with particular performance or capacity characteristics, such as a fast SSD or a high-capacity HDD.

Within the destination directory, MySQL creates a subdirectory corresponding to the database name, and within that a .ibd file for the new table. In the database directory beneath the MySQL DATADIR directory, MySQL creates a table_name.isl file containing the path name for the table. The .isl file is treated by MySQL like a symbolic link. (Using actual symbolic links has never been supported for InnoDB tables.)

The following example demonstrates creating a file-per-table tablespace outside the MySQL data directory. It shows the .ibd file created in the specified directory, and the .isl file created in the database directory beneath the MySQL data directory.

mysql> USE test;
Database changed

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | ON    |
+-----------------------+-------+
1 row in set (0.00 sec)

mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) DATA DIRECTORY = '/alternative/directory';
Query OK, 0 rows affected (0.03 sec)

# MySQL creates a .ibd file for the new table in a subdirectory corresponding
# to the database name

db_user@ubuntu:~/alternative/directory/test$ ls
t1.ibd

# MySQL creates a .isl file containing the path name for the table in a directory
# beneath the MySQL data directory

db_user@ubuntu:~/mysql/data/test$ ls
db.opt  t1.frm  t1.isl

You can also use CREATE TABLE ... TABLESPACE in combination with the DATA DIRECTORY clause to create a file-per-table tablespace outside the MySQL data directory. To do so, you must specify innodb_file_per_table as the tablespace name.

CREATE TABLE t2 (c1 INT PRIMARY KEY) TABLESPACE = innodb_file_per_table
  DATA DIRECTORY = '/alternative/directory';

You do not have to enable innodb_file_per_table when using this method.

Usage Notes:

  • MySQL initially holds the .ibd file open, preventing you from dismounting the device, but might eventually close the table if the server is busy. Be careful not to accidentally dismount an external device while MySQL is running, or to start MySQL while the device is disconnected. Attempting to access a table when the associated .ibd file is missing causes a serious error that requires a server restart.

    A server restart might fail if the .ibd file is still not at the expected path. In this case, manually remove the table_name.isl file in the database directory, and after restarting perform a DROP TABLE to delete the .frm file and remove the information about the table from the data dictionary.

  • Before placing tables on an NFS-mounted volume, review the potential issues outlined in Using NFS with MySQL.

  • If you use an LVM snapshot, file copy, or other file-based mechanism to back up the .ibd file, always use the FLUSH TABLES ... FOR EXPORT statement first to make sure all changes that were buffered in memory are flushed to disk before the backup occurs.

  • The DATA DIRECTORY clause is a supported alternative to using symbolic links, which has always been problematic and was never supported for individual InnoDB tables.

15.7.6 Copying File-Per-Table Tablespaces to Another Server

This section describes how to copy file-per-table tablespaces from one database server to another, otherwise known as the Transportable Tablespaces feature.

For information about other InnoDB table copying methods, see Section 15.8.4, “Moving or Copying InnoDB Tables to Another Machine”.

There are many reasons why you might copy an InnoDB file-per-table tablespace to a different database server:

  • To run reports without putting extra load on a production server.

  • To set up identical data for a table on a new slave server.

  • To restore a backed-up version of a table or partition after a problem or mistake.

  • As a faster way of moving data around than importing the results of a mysqldump command. The data is available immediately, rather than having to be re-inserted and the indexes rebuilt.

  • To move a file-per-table tablespace to a server with storage medium that better suits system requirements. For example, you may want to have busy tables on an SSD device, or large tables on a high-capacity HDD device.

Limitations and Usage Notes

  • The tablespace copy procedure is only possible when innodb_file_per_table is set to ON, which is the default setting as of MySQL 5.6.6. Tables residing in the shared system tablespace cannot be quiesced.

  • When a table is quiesced, only read-only transactions are allowed on the affected table.

  • When importing a tablespace, the page size must match the page size of the importing instance.

  • DISCARD TABLESPACE is not supported for tablespaces with a parent-child (primary key-foreign key) relationship when foreign_key_checks is set to 1. Before discarding a tablespace for parent-child tables, set foreign_key_checks=0. Partitioned InnoDB tables do not support foreign keys.

  • ALTER TABLE ... IMPORT TABLESPACE does not enforce foreign key constraints on imported data. If there are foreign key constraints between tables, all tables should be exported at the same (logical) point in time. Partitioned InnoDB tables do not support foreign keys.

  • ALTER TABLE ... IMPORT TABLESPACE and ALTER TABLE ... IMPORT PARTITION ... TABLESPACE do not require a .cfg metadata file to import a tablespace. However, metadata checks are not performed when importing without a .cfg file, and a warning similar to the following is issued:

    Message: InnoDB: IO Read error: (2, No such file or directory) Error opening '.\
    test\t.cfg', will attempt to import without schema verification
    1 row in set (0.00 sec)
    

    The ability to import without a .cfg file may be more convenient when no schema mismatches are expected. Additionally, the ability to import without a .cfg file could be useful in crash recovery scenarios in which metadata cannot be collected from an .ibd file.

  • Due to a .cfg metadata file limitation, schema mismatches are not reported for partition type or partition definition differences when importing tablespace files for partitioned tables. Column differences are reported.

  • When running ALTER TABLE ... DISCARD PARTITION ... TABLESPACE and ALTER TABLE ... IMPORT PARTITION ... TABLESPACE on subpartitioned tables, both partition and subpartition table names are allowed. When a partition name is specified, subpartitions of that partition are included in the operation.

  • In MySQL 5.6 or later, importing a tablespace file from another server works if both servers have GA (General Availability) status and their versions are within the same series. Otherwise, the file must have been created on the server into which it is imported.

  • In replication scenarios, innodb_file_per_table must be set to ON on both the master and slave.

  • On Windows, InnoDB stores database, tablespace, and table names internally in lowercase. To avoid import problems on case-sensitive operating systems such as Linux and UNIX, create all databases, tablespaces, and tables using lowercase names. A convenient way to accomplish this is to add the following line to the [mysqld] section of your my.cnf or my.ini file before creating databases, tablespaces, or tables:

    [mysqld]
    lower_case_table_names=1
    
  • ALTER TABLE ... DISCARD TABLESPACE and ALTER TABLE ... IMPORT TABLESPACE are not supported with tables that belong to an InnoDB general tablespace. For more information, see CREATE TABLESPACE.

  • The default row format for InnoDB tables is configurable using the innodb_default_row_format configuration option. Attempting to import a table that does not explicitly define a row format (ROW_FORMAT), or that uses ROW_FORMAT=DEFAULT, could result in a schema mismatch error if the innodb_default_row_format setting on the source server differs from the setting on the destination server. For related information, see Section 15.11.2, “Specifying the Row Format for a Table”. A quick check is shown in the sketch following this list.

  • When exporting a tablespace that is encrypted using the InnoDB tablespace encryption feature, InnoDB generates a .cfp file in addition to a .cfg metadata file. The .cfp file must be copied to the destination server together with the .cfg file and tablespace file before performing the ALTER TABLE ... IMPORT TABLESPACE operation on the destination server. The .cfp file contains a transfer key and an encrypted tablespace key. On import, InnoDB uses the transfer key to decrypt the tablespace key. For related information, see Section 15.7.10, “InnoDB Tablespace Encryption”.
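
As a quick check before exporting, you can compare the default row format on the source and destination servers and, if necessary, declare the row format explicitly when creating the table. This is only a sketch; t is an illustrative table name:

mysql> SELECT @@innodb_default_row_format;
mysql> CREATE TABLE t (c1 INT PRIMARY KEY) ROW_FORMAT=DYNAMIC;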

15.7.6.1 Transportable Tablespace Examples

Note

If you are transporting tables that are encrypted using InnoDB tablespace encryption, review Limitations and Usage Notes for additional procedural information before you begin.

Example 1: Copying an InnoDB Table From One Server To Another

This procedure demonstrates how to copy a regular InnoDB table from a running MySQL server instance to another running instance. The same procedure with minor adjustments can be used to perform a full table restore on the same instance.

  1. On the source server, create a table if one does not exist:

    mysql> use test;
    mysql> CREATE TABLE t(c1 INT) engine=InnoDB;
    
  2. On the destination server, create a table if one does not exist:

    mysql> use test;
    mysql> CREATE TABLE t(c1 INT) engine=InnoDB;
    
  3. On the destination server, discard the existing tablespace. (Before a tablespace can be imported, InnoDB must discard the tablespace that is attached to the receiving table.)

    mysql> ALTER TABLE t DISCARD TABLESPACE;
    
  4. On the source server, run FLUSH TABLES ... FOR EXPORT to quiesce the table and create the .cfg metadata file:

    mysql> use test;
    mysql> FLUSH TABLES t FOR EXPORT;
    

    The metadata (.cfg) is created in the InnoDB data directory.

    Note

    FLUSH TABLES ... FOR EXPORT is available as of MySQL 5.6.6. The statement ensures that changes to the named table have been flushed to disk so that a binary table copy can be made while the server is running. When FLUSH TABLES ... FOR EXPORT is run, InnoDB produces a .cfg file in the same database directory as the table. The .cfg file contains metadata used for schema verification when importing the tablespace file.

  5. Copy the .ibd file and .cfg metadata file from the source server to the destination server. For example:

    shell> scp /path/to/datadir/test/t.{ibd,cfg} destination-server:/path/to/datadir/test
    
    Note

    The .ibd file and .cfg file must be copied before releasing the shared locks, as described in the next step.

  6. On the source server, use UNLOCK TABLES to release the locks acquired by FLUSH TABLES ... FOR EXPORT:

    mysql> use test;
    mysql> UNLOCK TABLES;
    
  7. On the destination server, import the tablespace:

    mysql> use test;
    mysql> ALTER TABLE t IMPORT TABLESPACE;
    
    Note

    The ALTER TABLE ... IMPORT TABLESPACE feature does not enforce foreign key constraints on imported data. If there are foreign key constraints between tables, all tables should be exported at the same (logical) point in time. In this case you would stop updating the tables, commit all transactions, acquire shared locks on the tables, and then perform the export operation.

Example 2: Copying an InnoDB Partitioned Table From One Server To Another

This procedure demonstrates how to copy a partitioned InnoDB table from a running MySQL server instance to another running instance. The same procedure with minor adjustments can be used to perform a full restore of a partitioned InnoDB table on the same instance.

  1. On the source server, create a partitioned table if one does not exist. In the following example, a table with three partitions (p0, p1, p2) is created:

    mysql> use test;
    mysql> CREATE TABLE t1 (i int) ENGINE = InnoDB PARTITION BY KEY (i) PARTITIONS 3;
    

    In the /datadir/test directory, there is a separate tablespace (.ibd) file for each of the three partitions.

    mysql> \! ls /path/to/datadir/test/
    db.opt  t1.frm  t1#P#p0.ibd  t1#P#p1.ibd  t1#P#p2.ibd
    
  2. On the destination server, create the same partitioned table:

    mysql> use test;
    mysql> CREATE TABLE t1 (i int) ENGINE = InnoDB PARTITION BY KEY (i) PARTITIONS 3;
    

    In the /datadir/test directory, there is a separate tablespace (.ibd) file for each of the three partitions.

    mysql> \! ls /path/to/datadir/test/
    db.opt  t1.frm  t1#P#p0.ibd  t1#P#p1.ibd  t1#P#p2.ibd
    
  3. On the destination server, discard the tablespace for the partitioned table. (Before the tablespace can be imported on the destination server, the tablespace that is attached to the receiving table must be discarded.)

    mysql> ALTER TABLE t1 DISCARD TABLESPACE;
    

    The three .ibd files that make up the tablespace for the partitioned table are discarded from the /datadir/test directory, leaving the following files:

    mysql> \! ls /path/to/datadir/test/
    db.opt  t1.frm
    
  4. On the source server, run FLUSH TABLES ... FOR EXPORT to quiesce the partitioned table and create the .cfg metadata files:

    mysql> use test;
    mysql> FLUSH TABLES t1 FOR EXPORT;
    

    Metadata (.cfg) files, one for each tablespace (.ibd) file, are created in the /datadir/test directory on the source server:

    mysql> \! ls /path/to/datadir/test/
    db.opt t1#P#p0.ibd  t1#P#p1.ibd  t1#P#p2.ibd
    t1.frm  t1#P#p0.cfg  t1#P#p1.cfg  t1#P#p2.cfg
    
    Note

    The FLUSH TABLES ... FOR EXPORT statement ensures that changes to the named table have been flushed to disk so that a binary table copy can be made while the server is running. When FLUSH TABLES ... FOR EXPORT is run, InnoDB produces a .cfg metadata file for the table's tablespace files in the same database directory as the table. The .cfg files contain metadata used for schema verification when importing tablespace files. FLUSH TABLES ... FOR EXPORT can only be run on the table, not on individual table partitions.

  5. Copy the .ibd and .cfg files from the source server database directory to the destination server database directory. For example:

    shell> scp /path/to/datadir/test/t1*.{ibd,cfg} destination-server:/path/to/datadir/test
    
    Note

    The .ibd and .cfg files must be copied before releasing the shared locks, as described in the next step.

  6. On the source server, use UNLOCK TABLES to release the locks acquired by FLUSH TABLES ... FOR EXPORT:

    mysql> use test;
    mysql> UNLOCK TABLES;
    
  7. On the destination server, import the tablespace for the partitioned table:

    mysql> use test;
    mysql> ALTER TABLE t1 IMPORT TABLESPACE;
    
Example 3: Copying InnoDB Table Partitions From One Server To Another

This procedure demonstrates how to copy InnoDB table partitions from a running MySQL server instance to another running instance. The same procedure with minor adjustments can be used to perform a restore of InnoDB table partitions on the same instance. In the following example, a partitioned table with four partitions (p0, p1, p2, p3) is created on the source server. Two of the partitions (p2 and p3) are copied to the destination server.

  1. On the source server, create a partitioned table if one does not exist. In the following example, a table with four partitions (p0, p1, p2, p3) is created:

    mysql> use test;
    mysql> CREATE TABLE t1 (i int) ENGINE = InnoDB PARTITION BY KEY (i) PARTITIONS 4;
    

    In the /datadir/test directory, there is a separate tablespace (.ibd) file for each of the four partitions.

    mysql> \! ls /path/to/datadir/test/
    db.opt  t1.frm  t1#P#p0.ibd  t1#P#p1.ibd  t1#P#p2.ibd t1#P#p3.ibd
    
  2. On the destination server, create the same partitioned table:

    mysql> use test;
    mysql> CREATE TABLE t1 (i int) ENGINE = InnoDB PARTITION BY KEY (i) PARTITIONS 4;
    

    In the /datadir/test directory, there is a separate tablespace (.ibd) file for each of the four partitions.

    mysql> \! ls /path/to/datadir/test/
    db.opt  t1.frm  t1#P#p0.ibd  t1#P#p1.ibd  t1#P#p2.ibd t1#P#p3.ibd
    
  3. On the destination server, discard the tablespace partitions that you plan to import from the source server. (Before tablespace partitions can be imported on the destination server, the corresponding partitions that are attached to the receiving table must be discarded.)

    mysql> ALTER TABLE t1 DISCARD PARTITION p2, p3 TABLESPACE;
    

    The .ibd files for the two discarded partitions are removed from the /datadir/test directory on the destination server, leaving the following files:

    mysql> \! ls /path/to/datadir/test/
    db.opt  t1.frm  t1#P#p0.ibd  t1#P#p1.ibd
    
    Note

    When ALTER TABLE ... DISCARD PARTITION ... TABLESPACE is run on subpartitioned tables, both partition and subpartition table names are allowed. When a partition name is specified, subpartitions of that partition are included in the operation.

  4. On the source server, run FLUSH TABLES ... FOR EXPORT to quiesce the partitioned table and create the .cfg metadata files.

    mysql> use test;
    mysql> FLUSH TABLES t1 FOR EXPORT;
    

    The metadata files (.cfg files) are created in the /datadir/test directory on the source server. There is a .cfg file for each tablespace (.ibd) file.

    mysql> \! ls /path/to/datadir/test/
    db.opt  t1#P#p0.ibd  t1#P#p1.ibd  t1#P#p2.ibd t1#P#p3.ibd
    t1.frm  t1#P#p0.cfg  t1#P#p1.cfg  t1#P#p2.cfg t1#P#p3.cfg
    
    Note

    The FLUSH TABLES ... FOR EXPORT statement ensures that changes to the named table have been flushed to disk so that a binary table copy can be made while the server is running. When FLUSH TABLES ... FOR EXPORT is run, InnoDB produces a .cfg metadata file for the table's tablespace files in the same database directory as the table. The .cfg files contain metadata used for schema verification when importing tablespace files. FLUSH TABLES ... FOR EXPORT can only be run on the table, not on individual table partitions.

  5. Copy the .ibd and .cfg files from the source server database directory to the destination server database directory. In this example, only the .ibd and .cfg files for partition 2 (p2) and partition 3 (p3) are copied to the data directory on the destination server. Partition 0 (p0) and partition 1 (p1) remain on the source server.

    shell> scp t1#P#p2.ibd  t1#P#p2.cfg t1#P#p3.ibd t1#P#p3.cfg destination-server:/path/to/datadir/test
    
    Note

    The .ibd files and .cfg files must be copied before releasing the shared locks, as described in the next step.

  6. On the source server, use UNLOCK TABLES to release the locks acquired by FLUSH TABLES ... FOR EXPORT:

    mysql> use test;
    mysql> UNLOCK TABLES;
    
  7. On the destination server, import the tablespace partitions (p2 and p3):

    mysql> use test;
    mysql> ALTER TABLE t1 IMPORT PARTITION p2, p3 TABLESPACE;
    
    Note

    When ALTER TABLE ... IMPORT PARTITION ... TABLESPACE is run on subpartitioned tables, both partition and subpartition table names are allowed. When a partition name is specified, subpartitions of that partition are included in the operation.

15.7.6.2 Transportable Tablespace Internals

The following information describes internals and error log messaging for the transportable tablespaces copy procedure for a regular InnoDB table.

When ALTER TABLE ... DISCARD TABLESPACE is run on the destination instance:

  • The table is locked in X mode.

  • The tablespace is detached from the table.

When FLUSH TABLES ... FOR EXPORT is run on the source instance:

  • The table being flushed for export is locked in shared mode.

  • The purge coordinator thread is stopped.

  • Dirty pages are synchronized to disk.

  • Table metadata is written to the binary .cfg file.

Expected error log messages for this operation:

2013-09-24T13:10:19.903526Z 2 [Note] InnoDB: Sync to disk of '"test"."t"' started.
2013-09-24T13:10:19.903586Z 2 [Note] InnoDB: Stopping purge
2013-09-24T13:10:19.903725Z 2 [Note] InnoDB: Writing table metadata to './test/t.cfg'
2013-09-24T13:10:19.904014Z 2 [Note] InnoDB: Table '"test"."t"' flushed to disk
 

When UNLOCK TABLES is run on the source instance:

  • The binary .cfg file is deleted.

  • The shared lock on the table or tables being imported is released and the purge coordinator thread is restarted.

Expected error log messages for this operation:

2013-09-24T13:10:21.181104Z 2 [Note] InnoDB: Deleting the meta-data file './test/t.cfg'
2013-09-24T13:10:21.181180Z 2 [Note] InnoDB: Resuming purge

When ALTER TABLE ... IMPORT TABLESPACE is run on the destination instance, the import algorithm performs the following operations for each tablespace being imported:

  • Each tablespace page is checked for corruption.

  • The space ID and log sequence numbers (LSNs) on each page are updated.

  • Flags are validated and the LSN is updated for the header page.

  • B-tree pages are updated.

  • The page state is set to dirty so that it is written to disk.

Expected error log messages for this operation:

2013-07-18 15:15:01 34960 [Note] InnoDB: Importing tablespace for table 'test/t' that was exported from host 'ubuntu'
2013-07-18 15:15:01 34960 [Note] InnoDB: Phase I - Update all pages
2013-07-18 15:15:01 34960 [Note] InnoDB: Sync to disk
2013-07-18 15:15:01 34960 [Note] InnoDB: Sync to disk - done!
2013-07-18 15:15:01 34960 [Note] InnoDB: Phase III - Flush changes to disk
2013-07-18 15:15:01 34960 [Note] InnoDB: Phase IV - Flush complete
Note

You may also receive a warning that a tablespace is discarded (if you discarded the tablespace for the destination table) and a message stating that statistics could not be calculated due to a missing .ibd file:

2013-07-18 15:14:38 34960 [Warning] InnoDB: Table "test"."t" tablespace is set as discarded.
2013-07-18 15:14:38 7f34d9a37700 InnoDB: cannot calculate statistics for table "test"."t" because the .ibd file is missing. For help, please refer to
http://dev.mysql.com/doc/refman/5.7/en/innodb-troubleshooting.html

15.7.7 Storing InnoDB Undo Logs in Separate Tablespaces

You can store InnoDB undo logs in one or more separate undo tablespaces outside of the system tablespace. This layout is different from the default configuration in which undo logs reside in the system tablespace. The I/O patterns for undo logs make undo tablespaces good candidates to move to SSD storage, while keeping the system tablespace on hard disk storage. Users cannot drop the separate tablespaces created to hold InnoDB undo logs, or the individual segments inside those tablespaces. However, undo logs stored in undo tablespaces can be truncated. For more information, see Section 15.7.8, “Truncating Undo Logs That Reside in Undo Tablespaces”.

Because undo tablespace files handle I/O operations formerly done inside the system tablespace, the definition of system tablespace is extended to include undo tablespace files.

The undo tablespace feature involves the innodb_undo_tablespaces, innodb_undo_directory, and innodb_undo_logs configuration options.

The innodb_undo_tablespaces and innodb_undo_directory options are non-dynamic startup options that can only be configured when initializing a MySQL instance, which means that undo tablespaces can only be created when the instance is initialized.

Configuring Separate Undo Tablespaces

The following procedure assumes the configuration is performed on a test instance prior to production deployment.

  1. Choose a directory location where you want InnoDB to create separate undo tablespaces for the undo logs. Specify the directory path as the argument to the innodb_undo_directory option in your MySQL configuration file or startup script; an example configuration appears after this procedure. If no path is specified, undo tablespaces are created in the MySQL data directory, as defined by datadir.

  2. Decide on a starting value for the innodb_undo_logs option, which defines the number of rollback segments used by InnoDB. (Undo logs exist within undo log segments, which are contained within rollback segments.) You can start with a relatively low value and increase it over time to examine the effect on performance.

    One rollback segment is always assigned to the system tablespace, and 32 rollback segments are reserved for use by temporary tables and are hosted in the temporary tablespace (ibtmp1). Therefore, to allocate rollback segments to undo tablespaces, innodb_undo_logs must be set to a value greater than 33. For example, if you have two undo tablespaces (innodb_undo_tablespaces=2), innodb_undo_logs must be set to 35 to assign one rollback segment to each of the two undo tablespaces.

    Note

    When you configure separate undo tablespaces, the rollback segment in the system tablespace is rendered inactive.

  3. Decide on a non-zero value for the innodb_undo_tablespaces option. The rollback segments specified by the innodb_undo_logs value are divided between this number of separate tablespaces. The innodb_undo_tablespaces value is fixed for the life of the MySQL instance, so if you are uncertain about the optimal value, estimate on the high side.

  4. Create a new MySQL instance, using the values you chose in the configuration file or in your MySQL startup script. Use a realistic workload with data volume similar to your production servers. Alternatively, use the transportable tablespaces feature to copy existing database tables to your newly configured MySQL instance. See Section 15.7.6, “Copying File-Per-Table Tablespaces to Another Server” for more information.

  5. Benchmark the performance of I/O intensive workloads.

  6. Periodically increase the value of innodb_undo_logs and rerun performance tests. Find the value where you stop experiencing gains in I/O performance.

  7. Deploy a new production instance using the ideal settings for these options. Set it up as a slave server in a replication configuration, or transfer data from an earlier production instance.
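
For example, the choices made in steps 1 through 3 might appear in my.cnf as follows before the new instance is initialized; the directory path and values are illustrative:

[mysqld]
innodb_undo_directory=/path/to/undo
innodb_undo_tablespaces=2
innodb_undo_logs=35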

Performance and Scalability Considerations

Keeping the undo logs in separate files allows the MySQL team to implement I/O and memory optimizations related to this transactional data. For example, because the undo data is written to disk and then rarely used (only in case of crash recovery), it does not need to be kept in the file system memory cache, in turn allowing a higher percentage of system memory to be devoted to the InnoDB buffer pool.

The typical SSD best practice of keeping the InnoDB system tablespace on a hard drive and moving the per-table tablespaces to SSD is assisted by moving the undo information into separate tablespace files.

Internals

The physical undo tablespace files are named undoN.ibd, where N is the space ID, including leading zeros.

Prior to MySQL 5.7.18, space IDs were assigned to undo tablespaces in consecutive order starting with space ID 1. As of MySQL 5.7.18, the first undo tablespace can be assigned a space ID other than 1. Space ID values for undo tablespaces are still assigned in consecutive order.

MySQL instances containing separate undo tablespaces cannot be downgraded to earlier releases such as MySQL 5.5 or 5.1.

15.7.8 Truncating Undo Logs That Reside in Undo Tablespaces

You can truncate undo logs that reside in undo tablespaces, provided that the following conditions are true:

  • Your MySQL instance is configured with a minimum of two undo tablespaces (innodb_undo_tablespaces=2). When an undo tablespace is truncated, it is temporarily taken offline. For the server to function, there must be at least one other active undo tablespace. The number of undo tablespaces is defined by the innodb_undo_tablespaces option, which can only be set when the MySQL instance is initialized. The default value is 0. To check the value of innodb_undo_tablespaces, submit the following query:

    mysql> SELECT @@innodb_undo_tablespaces;
    +---------------------------+
    | @@innodb_undo_tablespaces |
    +---------------------------+
    |                         2 |
    +---------------------------+
    1 row in set (0.00 sec)
  • innodb_undo_logs, which defines the number of rollback segments used by InnoDB, must be set to 35 or greater. A setting of 35 or greater ensures that a redo-enabled undo log is assigned to each of the two undo tablespaces. With an innodb_undo_logs setting of 35:

    • The first rollback segment always resides in the system tablespace (when undo tablespaces are present, this rollback segment is inactive)

    • Rollback segments 2 to 33 reside in the shared temporary tablespace (ibtmp1)

    • The 34th rollback segment resides in the first undo tablespace (if present)

    • The 35th rollback segment resides in the second undo tablespace (if present)

    There is a many-to-one relationship between rollback segments and undo tablespaces. If the number of allocated rollback segments is greater than 35, the additional rollback segments are assigned to undo tablespaces in a round-robin fashion. For example, if you have 2 undo tablespaces (undo tablespace 1 and undo tablespace 2) and innodb_undo_logs=37, undo tablespace 1 and undo tablespace 2 would each be assigned a second rollback segment.

    By default, innodb_undo_logs is set to 128, which is also the maximum value. To check the value of innodb_undo_logs, submit the following query:

    mysql> SELECT @@innodb_undo_logs;
    +--------------------+
    | @@innodb_undo_logs |
    +--------------------+
    |                128 |
    +--------------------+
    1 row in set (0.00 sec)

    innodb_undo_logs is a dynamic global variable and can be configured using a SET GLOBAL statement:

    mysql> SET GLOBAL innodb_undo_logs=128;

Enabling Truncation of Undo Tablespaces

To truncate undo logs that reside in undo tablespaces, you must first enable innodb_undo_log_truncate.

mysql> SET GLOBAL innodb_undo_log_truncate=ON;

When you enable innodb_undo_log_truncate, undo tablespace files that exceed the size limit defined by innodb_max_undo_log_size are marked for truncation. innodb_max_undo_log_size is a dynamic global variable with a default value of 1024 MiB (1073741824 bytes).

mysql> SELECT @@innodb_max_undo_log_size;
+----------------------------+
| @@innodb_max_undo_log_size |
+----------------------------+
|                 1073741824 |
+----------------------------+
1 row in set (0.00 sec)

You can configure innodb_max_undo_log_size using a SET GLOBAL statement:

mysql> SET GLOBAL innodb_max_undo_log_size=2147483648;
Query OK, 0 rows affected (0.00 sec)

When innodb_undo_log_truncate is enabled:

  1. Undo tablespaces that exceed the innodb_max_undo_log_size setting are marked for truncation. Selection of an undo tablespace for truncation is performed in a round-robin fashion to avoid truncating the same undo tablespace each time.

  2. Rollback segments residing in the selected undo tablespace are made inactive so that they are not allocated to new transactions. Existing transactions that are currently using rollback segments are allowed to complete.

  3. The purge system frees rollback segments that are no longer needed.

  4. After all rollback segments in the undo tablespace are freed, the truncate operation runs and the undo tablespace is truncated to its initial size. The initial size of an undo tablespace file is 10MB.

    Note

    If you check the size of an undo tablespace after a truncation operation, the file size may be larger than 10MB due to immediate use following the completion of the truncation operation. The innodb_undo_directory option defines the location of undo tablespace files. The default value of . represents the directory where InnoDB creates its other log files by default.

    mysql> select @@innodb_undo_directory;
    +-------------------------+
    | @@innodb_undo_directory |
    +-------------------------+
    | .                       |
    +-------------------------+
    1 row in set (0.00 sec)
  5. The rollback segments are reactivated so that they can be allocated to new transactions.

Expediting Truncation of Undo Tablespace Files

An undo tablespace cannot be truncated until its rollback segments are freed. Normally, the purge system frees rollback segments once every 128 times that purge is invoked. To expedite the truncation of undo tablespaces, you can use the innodb_purge_rseg_truncate_frequency option to temporarily increase the frequency with which the purge system frees rollback segments. By default, innodb_purge_rseg_truncate_frequency is 128, which is also the maximum value.

mysql> select @@innodb_purge_rseg_truncate_frequency;
+----------------------------------------+
| @@innodb_purge_rseg_truncate_frequency |
+----------------------------------------+
|                                    128 |
+----------------------------------------+
1 row in set (0.00 sec)

To increase the frequency with which the purge thread frees rollback segments, decrease the value of innodb_purge_rseg_truncate_frequency. For example:

mysql> SET GLOBAL innodb_purge_rseg_truncate_frequency=32;
Query OK, 0 rows affected (0.00 sec)

Performance Impact of Truncating Undo Tablespace Files Online

While an undo tablespace truncation operation is in progress, rollback segments in one undo tablespace are temporarily deactivated. For example, if you have 2 undo tablespaces (innodb_undo_tablespaces=2) and 128 allocated undo logs (innodb_undo_logs=128), 95 of the undo logs reside in the two undo tablespaces (48 rollback segments in one undo tablespace and 47 in the other). If the first undo tablespace is taken offline, 48 undo logs are made inactive, reducing the undo log resource by slightly more than half. While the truncation operation is in progress, the remaining undo logs assume responsibility for the entire system load, which may result in a slight performance degradation. The degree of performance degradation depends on a number of factors including:

  • Number of undo tablespaces

  • Number of undo logs

  • Undo tablespace size

  • Speed of the I/O subsystem

  • Existing long-running transactions

  • System load

15.7.9 InnoDB General Tablespaces

A general tablespace is a new type of InnoDB tablespace, introduced in MySQL 5.7. The general tablespace feature provides the following capabilities:

  • Similar to the system tablespace, general tablespaces are shared tablespaces that can store data for multiple tables.

  • General tablespaces have a potential memory advantage over file-per-table tablespaces. The server keeps tablespace metadata in memory for the lifetime of a tablespace. Multiple tables in fewer general tablespaces consume less memory for tablespace metadata than the same number of tables in separate file-per-table tablespaces.

  • General tablespace data files may be placed in a directory relative to or independent of the MySQL data directory, which provides you with many of the data file and storage management capabilities of file-per-table tablespaces. As with file-per-table tablespaces, the ability to place data files outside of the MySQL data directory allows you to manage performance of critical tables separately, setup RAID or DRBD for specific tables, or bind tables to particular disks, for example.

  • General tablespaces support both Antelope and Barracuda file formats, and therefore support all table row formats and associated features. With support for both file formats, general tablespaces have no dependence on innodb_file_format or innodb_file_per_table settings, nor do these variables have any effect on general tablespaces.

  • The TABLESPACE option can be used with CREATE TABLE to create tables in a general tablespace, a file-per-table tablespace, or the system tablespace.

  • The TABLESPACE option can be used with ALTER TABLE to move tables between general tablespaces, file-per-table tablespaces, and the system tablespace. Previously, it was not possible to move a table from a file-per-table tablespace to the system tablespace. With the general tablespace feature, you can now do so.
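
For example, assuming a table t1 that currently resides in a file-per-table tablespace, it can be moved into the system tablespace and back using the reserved tablespace names. This is a sketch, not a complete procedure:

mysql> ALTER TABLE t1 TABLESPACE=innodb_system;
mysql> ALTER TABLE t1 TABLESPACE=innodb_file_per_table;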

Creating a General Tablespace

General tablespaces are created using CREATE TABLESPACE syntax.

CREATE TABLESPACE tablespace_name
    ADD DATAFILE 'file_name'
    [FILE_BLOCK_SIZE = value]
        [ENGINE [=] engine_name]

A general tablespace may be created in the MySQL data directory or in a directory outside of the MySQL data directory. To avoid conflicts with implicitly created file-per-table tablespaces, creating a general tablespace in a subdirectory under the MySQL data directory is not supported. Also, when creating a general tablespace outside of the MySQL data directory, the directory must exist prior to creating the tablespace.

An .isl file is created in the MySQL data directory when a general tablespace is created outside of the MySQL data directory.

Examples:

Creating a general tablespace in the MySQL data directory:

mysql> CREATE TABLESPACE `ts1` ADD DATAFILE 'ts1.ibd' Engine=InnoDB;

Creating a general tablespace in a directory outside of the MySQL data directory:

mysql> CREATE TABLESPACE `ts1` ADD DATAFILE '/my/tablespace/directory/ts1.ibd' Engine=InnoDB;

You can specify a path that is relative to the MySQL data directory as long as the tablespace directory is not under the MySQL data directory. In this example, the my_tablespace directory is at the same level as the MySQL data directory:

mysql> CREATE TABLESPACE `ts1` ADD DATAFILE '../my_tablespace/ts1.ibd' Engine=InnoDB;
Note

The ENGINE = InnoDB clause must be defined as part of the CREATE TABLESPACE statement or InnoDB must be defined as the default storage engine (default_storage_engine=InnoDB).

Adding Tables to a General Tablespace

After creating an InnoDB general tablespace, you can use CREATE TABLE tbl_name ... TABLESPACE [=] tablespace_name or ALTER TABLE tbl_name TABLESPACE [=] tablespace_name to add tables to the tablespace, as shown in the following examples:

CREATE TABLE:

mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1 ROW_FORMAT=COMPACT;

ALTER TABLE:

mysql> ALTER TABLE t2 TABLESPACE ts1;

For detailed syntax information, see CREATE TABLE and ALTER TABLE.

General Tablespace Row Format Support

General tablespaces support all table row formats (REDUNDANT, COMPACT, DYNAMIC, COMPRESSED) with the caveat that compressed and uncompressed tables cannot coexist in the same general tablespace due to different physical page sizes.

For a general tablespace to contain compressed tables (ROW_FORMAT=COMPRESSED), FILE_BLOCK_SIZE must be specified, and the FILE_BLOCK_SIZE value must be a valid compressed page size in relation to the innodb_page_size value. Also, the physical page size of the compressed table (KEY_BLOCK_SIZE) must be equal to FILE_BLOCK_SIZE/1024. For example, if innodb_page_size=16K and FILE_BLOCK_SIZE=8K, the KEY_BLOCK_SIZE of the table must be 8.

The following table shows permitted FILE_BLOCK_SIZE and KEY_BLOCK_SIZE values for each innodb_page_size value. FILE_BLOCK_SIZE values may also be specified in bytes. To determine a valid KEY_BLOCK_SIZE value for a given FILE_BLOCK_SIZE, divide the FILE_BLOCK_SIZE value by 1024. Table compression is not supported for 32K and 64K InnoDB page sizes. For more information about KEY_BLOCK_SIZE, see CREATE TABLE, and Section 15.9.1.2, “Creating Compressed Tables”.

Table 15.5 FILE_BLOCK_SIZE and KEY_BLOCK_SIZE Values for CREATE TABLESPACE

InnoDB Page Size       Permitted FILE_BLOCK_SIZE     Permitted KEY_BLOCK_SIZE
(innodb_page_size)     Values                        Values

64K                    64K (65536)                   Compression is not supported
32K                    32K (32768)                   Compression is not supported
16K                    16K (16384)                   N/A: If innodb_page_size is equal to FILE_BLOCK_SIZE, the tablespace cannot contain a compressed table.
                       8K (8192)                     8
                       4K (4096)                     4
                       2K (2048)                     2
                       1K (1024)                     1
8K                     8K (8192)                     N/A: If innodb_page_size is equal to FILE_BLOCK_SIZE, the tablespace cannot contain a compressed table.
                       4K (4096)                     4
                       2K (2048)                     2
                       1K (1024)                     1
4K                     4K (4096)                     N/A: If innodb_page_size is equal to FILE_BLOCK_SIZE, the tablespace cannot contain a compressed table.
                       2K (2048)                     2
                       1K (1024)                     1

This example demonstrates creating a general tablespace and adding a compressed table. The example assumes a default innodb_page_size of 16K. The FILE_BLOCK_SIZE of 8192 requires that the compressed table have a KEY_BLOCK_SIZE of 8.

mysql> CREATE TABLESPACE `ts2` ADD DATAFILE 'ts2.ibd' FILE_BLOCK_SIZE = 8192 Engine=InnoDB;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t4 (c1 INT PRIMARY KEY) TABLESPACE ts2 ROW_FORMAT=COMPRESSED
KEY_BLOCK_SIZE=8;
Query OK, 0 rows affected (0.00 sec)

If you do not specify FILE_BLOCK_SIZE when creating a general tablespace, FILE_BLOCK_SIZE defaults to innodb_page_size. When FILE_BLOCK_SIZE is equal to innodb_page_size, the tablespace may only contain tables with an uncompressed row format (COMPACT, REDUNDANT, and DYNAMIC row formats).
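For example, the following sketch (the tablespace and table names are illustrative) creates a general tablespace without specifying FILE_BLOCK_SIZE, so FILE_BLOCK_SIZE defaults to innodb_page_size, and then adds a table with an uncompressed row format:

mysql> CREATE TABLESPACE `ts3` ADD DATAFILE 'ts3.ibd' Engine=InnoDB;

mysql> CREATE TABLE t5 (c1 INT PRIMARY KEY) TABLESPACE ts3 ROW_FORMAT=DYNAMIC;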

Moving Non-Partitioned Tables Between Tablespaces Using ALTER TABLE

You can use ALTER TABLE with the TABLESPACE option to move a non-partitioned InnoDB table to an existing general tablespace, to a new file-per-table tablespace, or to the system tablespace.

Note

Running an ALTER TABLE tbl_name TABLESPACE [=] tablespace_name operation on a partitioned table only modifies the table's default tablespace. It does not move the table's partitions.

To move a non-partitioned table from a file-per-table tablespace or from the system tablespace to a general tablespace, specify the name of the general tablespace. The general tablespace must exist. See CREATE TABLESPACE for more information.

ALTER TABLE tbl_name TABLESPACE [=] tablespace_name

To move a non-partitioned table from a general tablespace or file-per-table tablespace to the system tablespace, specify innodb_system as the tablespace name.

ALTER TABLE tbl_name ... TABLESPACE [=] innodb_system

To move a non-partitioned table from the system tablespace or a general tablespace to a file-per-table tablespace, specify innodb_file_per_table as the tablespace name.

ALTER TABLE tbl_name ... TABLESPACE [=] innodb_file_per_table

ALTER TABLE ... TABLESPACE operations always cause a full table rebuild, even if the TABLESPACE attribute has not changed from its previous value.

ALTER TABLE ... TABLESPACE syntax does not support moving a table from a temporary tablespace to a persistent tablespace.

The DATA DIRECTORY clause is permitted with CREATE TABLE ... TABLESPACE=innodb_file_per_table but is otherwise not supported for use in combination with the TABLESPACE option.
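The following condensed sketch illustrates the moves described earlier in this section. The table name t3 is hypothetical, and the sketch assumes that the general tablespace ts1 already exists (see Creating a General Tablespace):

-- Create a table in its own file-per-table tablespace
CREATE TABLE t3 (c1 INT PRIMARY KEY) TABLESPACE = innodb_file_per_table ENGINE=InnoDB;

-- Move the table from the file-per-table tablespace to general tablespace ts1
ALTER TABLE t3 TABLESPACE ts1;

-- Move the table from the general tablespace to the system tablespace
ALTER TABLE t3 TABLESPACE innodb_system;

-- Move the table back to a file-per-table tablespace
ALTER TABLE t3 TABLESPACE innodb_file_per_table;

Each ALTER TABLE statement above performs a full table rebuild, as noted previously.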

General Tablespace Table Partition Support

The TABLESPACE option may be used to assign individual table partitions or subpartitions to a general tablespace, a separate file-per-table tablespace, or the system tablespace. All partitions must belong to the same storage engine. Usage is demonstrated in the following examples.

mysql> CREATE TABLESPACE `ts1` ADD DATAFILE 'ts1.ibd' Engine=InnoDB;
mysql> CREATE TABLESPACE `ts2` ADD DATAFILE 'ts2.ibd' Engine=InnoDB;

mysql> CREATE TABLE t1 (a INT, b INT) ENGINE = InnoDB
    ->  PARTITION BY RANGE(a) SUBPARTITION BY KEY(b) (
    ->    PARTITION p1 VALUES LESS THAN (100) TABLESPACE=`ts1`,
    ->    PARTITION p2 VALUES LESS THAN (1000) TABLESPACE=`ts2`,
    ->    PARTITION p3 VALUES LESS THAN (10000) TABLESPACE `innodb_file_per_table`,
    ->    PARTITION p4 VALUES LESS THAN (100000) TABLESPACE `innodb_system`);

mysql> CREATE TABLE t2 (a INT, b INT) ENGINE = InnoDB
    ->  PARTITION BY RANGE(a) SUBPARTITION BY KEY(b) (
    ->    PARTITION p1 VALUES LESS THAN (100) TABLESPACE=`ts1`
    ->      (SUBPARTITION sp1,
    ->       SUBPARTITION sp2),
    ->    PARTITION p2 VALUES LESS THAN (1000)
    ->      (SUBPARTITION sp3,
    ->       SUBPARTITION sp4 TABLESPACE=`ts2`),
    ->    PARTITION p3 VALUES LESS THAN (10000)
    ->      (SUBPARTITION sp5 TABLESPACE `innodb_system`,
    ->       SUBPARTITION sp6 TABLESPACE `innodb_file_per_table`));

The TABLESPACE option is also supported with ALTER TABLE.

mysql> ALTER TABLE t1 ADD PARTITION (PARTITION p5 VALUES LESS THAN (1000000) TABLESPACE = `ts1`);
Note

If the TABLESPACE = tablespace_name option is not defined, the ALTER TABLE ... ADD PARTITION operation adds the partition to the table's default tablespace, which can be specified at the table level during CREATE TABLE or ALTER TABLE.

To verify that partitions were placed in the specified tablespaces, you can query INFORMATION_SCHEMA.INNODB_SYS_TABLES:

mysql> SELECT NAME, SPACE, SPACE_TYPE FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
    -> WHERE NAME LIKE '%t1%';
+-----------------------+-------+------------+
| NAME                  | SPACE | SPACE_TYPE |
+-----------------------+-------+------------+
| test/t1#P#p1#SP#p1sp0 |    57 | General    |
| test/t1#P#p2#SP#p2sp0 |    58 | General    |
| test/t1#P#p3#SP#p3sp0 |    59 | Single     |
| test/t1#P#p4#SP#p4sp0 |     0 | System     |
| test/t1#P#p5#SP#p5sp0 |    57 | General    |
+-----------------------+-------+------------+

mysql> SELECT NAME, SPACE, SPACE_TYPE FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
    -> WHERE NAME LIKE '%t2%';
+---------------------+-------+------------+
| NAME                | SPACE | SPACE_TYPE |
+---------------------+-------+------------+
| test/t2#P#p1#SP#sp1 |    57 | General    |
| test/t2#P#p1#SP#sp2 |    57 | General    |
| test/t2#P#p2#SP#sp3 |    60 | Single     |
| test/t2#P#p2#SP#sp4 |    58 | General    |
| test/t2#P#p3#SP#sp5 |     0 | System     |
| test/t2#P#p3#SP#sp6 |    61 | Single     |
+---------------------+-------+------------+

Moving Table Partitions Between Tablespaces Using ALTER TABLE

To move table partitions to a different tablespace, you must move each partition using an ALTER TABLE tbl_name REORGANIZE PARTITION statement.

The following example demonstrates how to move table partitions to a different tablespace. INFORMATION_SCHEMA.INNODB_SYS_TABLES and INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES are queried to verify that partitions are placed in the expected tablespace.

Note

If the TABLESPACE = tablespace_name option is not defined in the REORGANIZE PARTITION statement, InnoDB moves the partition to the table's default tablespace. In this example, tablespace ts1, which is defined at the table level, is the default tablespace for table t1. Partition P3 is moved from the system tablespace to tablespace ts1 since no TABLESPACE option is specified in the ALTER TABLE t1 REORGANIZE PARTITION statement for partition P3.

To change a partitioned table's default tablespace, you can run ALTER TABLE tbl_name TABLESPACE [=] tablespace_name on the partitioned table.

mysql> CREATE TABLESPACE ts1 ADD DATAFILE 'ts1.ibd';
mysql> CREATE TABLESPACE ts2 ADD DATAFILE 'ts2.ibd';

mysql> CREATE TABLE t1 ( a INT NOT NULL, PRIMARY KEY (a))
    ->  ENGINE=InnoDB TABLESPACE ts1                          
    ->  PARTITION BY RANGE (a) PARTITIONS 3 (
    ->    PARTITION P1 VALUES LESS THAN (2),
    ->    PARTITION P2 VALUES LESS THAN (4) TABLESPACE `innodb_file_per_table`,
    ->    PARTITION P3 VALUES LESS THAN (6) TABLESPACE `innodb_system`);


mysql> SELECT A.NAME as partition_name, A.SPACE_TYPE as space_type, B.NAME as space_name
    -> FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES A
    -> LEFT JOIN INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES B
    -> ON A.SPACE = B.SPACE WHERE A.NAME LIKE '%t1%' ORDER BY A.NAME;
+----------------+------------+--------------+
| partition_name | space_type | space_name   |
+----------------+------------+--------------+
| test/t1#P#P1   | General    | ts1          |
| test/t1#P#P2   | Single     | test/t1#P#P2 |
| test/t1#P#P3   | System     | NULL         |
+----------------+------------+--------------+

mysql> ALTER TABLE t1 REORGANIZE PARTITION P1
    -> INTO (PARTITION P1 VALUES LESS THAN (2) TABLESPACE = `ts2`);
  
mysql> ALTER TABLE t1 REORGANIZE PARTITION P2
    -> INTO (PARTITION P2 VALUES LESS THAN (4) TABLESPACE = `ts2`);
  
mysql> ALTER TABLE t1 REORGANIZE PARTITION P3
    -> INTO (PARTITION P3 VALUES LESS THAN (6));

mysql> SELECT A.NAME AS partition_name, A.SPACE_TYPE AS space_type, B.NAME AS space_name
    -> FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES A
    -> LEFT JOIN INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES B
    -> ON A.SPACE = B.SPACE WHERE A.NAME LIKE '%t1%' ORDER BY A.NAME;
+----------------+------------+------------+
| partition_name | space_type | space_name |
+----------------+------------+------------+
| test/t1#P#P1   | General    | ts2        |
| test/t1#P#P2   | General    | ts2        |
| test/t1#P#P3   | General    | ts1        |
+----------------+------------+------------+

Dropping a General Tablespace

The DROP TABLESPACE statement is used to drop an InnoDB general tablespace.

All tables must be dropped from the tablespace prior to a DROP TABLESPACE operation. If the tablespace is not empty, DROP TABLESPACE returns an error.

If a DROP TABLESPACE operation on an empty general tablespace returns an error, the tablespace may contain an orphan temporary or intermediate table that was left by an ALTER TABLE operation that was interrupted by a server exit. For more information, see Section 15.21.3, “Troubleshooting InnoDB Data Dictionary Operations”.

A general InnoDB tablespace is not deleted automatically when the last table in the tablespace is dropped. The tablespace must be dropped explicitly using DROP TABLESPACE tablespace_name.

A general tablespace does not belong to any particular database. A DROP DATABASE operation can drop tables that belong to a general tablespace but it cannot drop the tablespace, even if the DROP DATABASE operation drops all tables that belong to the tablespace. A general tablespace must be dropped explicitly using DROP TABLESPACE tablespace_name.

Similar to the system tablespace, truncating or dropping tables stored in a general tablespace creates free space internally in the general tablespace .ibd data file which can only be used for new InnoDB data. Space is not released back to the operating system as it is when a file-per-table tablespace is deleted during a DROP TABLE operation.

This example demonstrates how to drop an InnoDB general tablespace. The general tablespace ts1 is created with a single table. The table must be dropped before dropping the tablespace.

mysql> CREATE TABLESPACE `ts1` ADD DATAFILE 'ts1.ibd' Engine=InnoDB;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1 Engine=InnoDB;
Query OK, 0 rows affected (0.02 sec)

mysql> DROP TABLE t1;
Query OK, 0 rows affected (0.01 sec)

mysql> DROP TABLESPACE ts1;
Query OK, 0 rows affected (0.01 sec)
Note

tablespace_name is a case-sensitive identifier in MySQL.

General Tablespace Limitations

  • A generated or existing tablespace cannot be changed to a general tablespace.

  • Creation of temporary general tablespaces is not supported.

  • General tablespaces do not support temporary tables.

  • Tables stored in a general tablespace may only be opened in MySQL releases that support general tablespaces.

  • Similar to the system tablespace, truncating or dropping tables stored in a general tablespace creates free space internally in the general tablespace .ibd data file which can only be used for new InnoDB data. Space is not released back to the operating system as it is for file-per-table tablespaces.

    Additionally, a table-copying ALTER TABLE operation on a table that resides in a shared tablespace (a general tablespace or the system tablespace) can increase the amount of space used by the tablespace. Such operations require as much additional space as the data in the table plus indexes. The additional space required for the table-copying ALTER TABLE operation is not released back to the operating system as it is for file-per-table tablespaces.

  • ALTER TABLE ... DISCARD TABLESPACE and ALTER TABLE ... IMPORT TABLESPACE are not supported for tables that belong to a general tablespace.

For more information see Section 14.1.19, “CREATE TABLESPACE Syntax”.

15.7.10 InnoDB Tablespace Encryption

InnoDB supports data encryption for InnoDB tables stored in file-per-table tablespaces. This feature provides at-rest encryption for physical tablespace data files.

InnoDB tablespace encryption uses a two tier encryption key architecture, consisting of a master encryption key and tablespace keys. When an InnoDB table is encrypted, a tablespace key is encrypted and stored in the tablespace header. When an application or authenticated user wants to access encrypted tablespace data, InnoDB uses a master encryption key to decrypt the tablespace key. The decrypted version of a tablespace key never changes, but the master encryption key may be changed as required. This action is referred to as master key rotation.

The InnoDB tablespace encryption feature relies on a keyring plugin for master encryption key management.

All MySQL editions provide a keyring_file plugin, which stores master encryption key data in a keyring file in the location specified by the keyring_file_data configuration option.
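For example, a minimal keyring_file configuration in the MySQL option file might look like the following sketch. The plugin library file name and keyring file path are illustrative and depend on your platform and installation; see Section 7.5.4, “The MySQL Keyring” for installation and configuration details:

[mysqld]
# Load the keyring plugin before InnoDB initializes (library name varies by platform)
early-plugin-load=keyring_file.so
# Location of the keyring file (path is illustrative)
keyring_file_data=/usr/local/mysql/mysql-keyring/keyring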

Warning

The InnoDB tablespace encryption feature in non-enterprise editions of MySQL uses the keyring_file plugin for encryption key management, which is not intended as a regulatory compliance solution. Security standards such as PCI, FIPS, and others require use of key management systems to secure, manage, and protect encryption keys in key vaults or hardware security modules (HSMs).

MySQL Enterprise Edition offers the keyring_okv plugin, which includes a KMIP client (KMIP v1.2) that works with Oracle Key Vault (OKV) to provide encryption key management. When InnoDB tablespace encryption uses OKV for encryption key management, the feature is referred to as MySQL Enterprise Transparent Data Encryption (TDE).

A secure and robust encryption key management solution such as OKV is critical for security and for compliance with various security standards. Among other benefits, using a key vault ensures that keys are stored securely, never lost, and only known to authorized key administrators. A key vault also maintains an encryption key history.

InnoDB tablespace encryption supports the Advanced Encryption Standard (AES) block-based encryption algorithm. It uses Electronic Codebook (ECB) block encryption mode for tablespace key encryption and Cipher Block Chaining (CBC) block encryption mode for data encryption.

For frequently asked questions about the InnoDB tablespace encryption feature, see Section A.16, “MySQL 5.7 FAQ: InnoDB Tablespace Encryption”.

InnoDB Tablespace Encryption Prerequisites

  • A keyring plugin (the keyring_file plugin or keyring_okv plugin) must be installed and configured. Keyring plugin installation is performed at startup using the --early-plugin-load option. Early loading ensures that the plugin is available prior to initialization of the InnoDB storage engine. For keyring plugin installation and configuration instructions, see Section 7.5.4, “The MySQL Keyring”.

    Only one keyring plugin should be enabled at a time. Enabling multiple keyring plugins is not supported.

    Important

    Once encrypted tables are created in a MySQL instance, the keyring plugin that was loaded when creating the encrypted tables must continue to be loaded using the --early-plugin-load option, prior to InnoDB initialization. Failing to do so results in errors on startup and during InnoDB recovery.

    To verify that a keyring plugin is active, use the SHOW PLUGINS statement or query the INFORMATION_SCHEMA.PLUGINS table. For example:

    mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS
        -> WHERE PLUGIN_NAME LIKE 'keyring%';
    +--------------+---------------+
    | PLUGIN_NAME  | PLUGIN_STATUS |
    +--------------+---------------+
    | keyring_file | ACTIVE        |
    +--------------+---------------+
    
  • The innodb_file_per_table option must be enabled (the default). InnoDB tablespace encryption only supports file-per-table tablespaces. Alternatively, you can specify the TABLESPACE='innodb_file_per_table' option when creating an encrypted table or altering an existing table to enable encryption.

  • Before using the InnoDB tablespace encryption feature with production data, ensure that you have taken steps to prevent loss of the master encryption key. If the master encryption key is lost, data stored in encrypted tablespace files is unrecoverable. If you are using the keyring_file plugin, it is recommended that you create a backup of the keyring file immediately after creating the first encrypted table and before and after master key rotation. The keyring file location is defined by the keyring_file_data configuration option. If you are using the keyring_okv plugin, ensure that you have performed the necessary keyring_okv plugin and Oracle Key Vault (OKV) configuration. For keyring plugin configuration, see Section 7.5.4, “The MySQL Keyring”. For OKV configuration, refer to the OKV documentation available at the Oracle Key Vault site.

Enabling and Disabling InnoDB Tablespace Encryption

To enable encryption for a new InnoDB table, specify the ENCRYPTION option in a CREATE TABLE statement.

mysql> CREATE TABLE t1 (c1 INT) ENCRYPTION='Y';

To enable encryption for an existing InnoDB table, specify the ENCRYPTION option in an ALTER TABLE statement.

mysql> ALTER TABLE t1 ENCRYPTION='Y';

To disable encryption for an InnoDB table, set ENCRYPTION='N' using ALTER TABLE.

mysql> ALTER TABLE t1 ENCRYPTION='N';
Note

Plan appropriately when altering an existing table with the ENCRYPTION option. ALTER TABLE ... ENCRYPTION operations rebuild the table using ALGORITHM=COPY. ALGORITHM=INPLACE is not supported.

InnoDB Tablespace Encryption and Master Key Rotation

The master encryption key should be rotated periodically and whenever you suspect that the key may have been compromised.

Master key rotation is an atomic, instance-level operation. Each time the master encryption key is rotated, all tablespace keys in the MySQL instance are re-encrypted and saved back to their respective tablespace headers. As an atomic operation, re-encryption must succeed for all tablespace keys once a rotation operation is initiated. If master key rotation is interrupted by a server failure, InnoDB rolls the operation forward on server restart. For more information, see InnoDB Tablespace Encryption and Recovery.

Rotating the master encryption key only changes the master encryption key and re-encrypts tablespace keys. It does not decrypt or re-encrypt associated tablespace data.

Rotating the master encryption key requires the SUPER privilege.

To rotate the master encryption key, run:

mysql> ALTER INSTANCE ROTATE INNODB MASTER KEY;

ALTER INSTANCE ROTATE INNODB MASTER KEY supports concurrent DML. However, it cannot be run concurrently with CREATE TABLE ... ENCRYPTED or ALTER TABLE ... ENCRYPTED operations, and locks are taken to prevent conflicts that could arise from concurrent execution of these statements. If one of the conflicting statements is running, it must complete before another can proceed.

InnoDB Tablespace Encryption and Recovery

If a server failure occurs during master key rotation, InnoDB continues the operation on server restart.

The keyring plugin that was installed when tables were encrypted must be loaded prior to storage engine initialization so that the information necessary to decrypt tablespace data pages can be retrieved from tablespace headers before InnoDB initialization and recovery activities access tablespace data. (See InnoDB Tablespace Encryption Prerequisites.)

When InnoDB initialization and recovery begin, the master key rotation operation resumes. Due to the server failure, some tablespace keys may already be encrypted using the new master encryption key. InnoDB reads the encryption data from each tablespace header, and if the data indicates that the tablespace key is encrypted using the old master encryption key, InnoDB retrieves the old key from the keyring and uses it to decrypt the tablespace key. InnoDB then re-encrypts the tablespace key using the new master encryption key and saves the re-encrypted tablespace key back to the tablespace header.

Exporting Encrypted Tables

When an encrypted table is exported, InnoDB generates a transfer key that is used to encrypt the tablespace key. The encrypted tablespace key and transfer key are stored in a tablespace_name.cfp file. This file together with the encrypted tablespace file is required to perform an import operation. On import, InnoDB uses the transfer key to decrypt the tablespace key in the tablespace_name.cfp file. For related information, see Section 15.7.6, “Copying File-Per-Table Tablespaces to Another Server”.

InnoDB Tablespace Encryption and Replication

Identifying Tables that Use InnoDB Tablespace Encryption

When the ENCRYPTION option is specified in a CREATE TABLE or ALTER TABLE statement, it is recorded in the CREATE_OPTIONS field of INFORMATION_SCHEMA.TABLES. This field may be queried to identify encrypted tables in a MySQL instance.

mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES
    -> WHERE CREATE_OPTIONS LIKE '%ENCRYPTION="Y"%';
+--------------+------------+----------------+
| TABLE_SCHEMA | TABLE_NAME | CREATE_OPTIONS |
+--------------+------------+----------------+
| test         | t1         | ENCRYPTION="Y" |
+--------------+------------+----------------+

InnoDB Tablespace Encryption Usage Notes

  • If the server exits or is stopped during normal operation, it is recommended to restart the server using the same encryption settings that were configured previously.

  • The first master encryption key is generated when the first new or existing table is encrypted.

  • Master key rotation re-encrypts tablespace keys but does not change the tablespace key itself. To change a tablespace key, you must disable and re-enable table encryption using ALTER TABLE tbl_name ENCRYPTION, which is an ALGORITHM=COPY operation that rebuilds the table.

  • If a table is created with both the COMPRESSION and ENCRYPTION options, compression is performed before tablespace data is encrypted.

  • keyring_file plugin usage notes:

    • If a keyring file is empty or missing, the first execution of ALTER INSTANCE ROTATE INNODB MASTER KEY creates a master encryption key.

    • Uninstalling the keyring_file plugin does not remove an existing keyring file.

    • It is recommended that you not place the keyring file under the same directory as tablespace data files. The location of the keyring file is specified by the keyring_file_data option.

    • Modifying the keyring_file_data option at runtime or restarting the server with a new keyring_file_data setting can cause previously encrypted tables to become inaccessible, resulting in the loss of data.

InnoDB Tablespace Encryption Limitations

  • Advanced Encryption Standard (AES) is the only supported encryption algorithm. InnoDB tablespace encryption uses Electronic Codebook (ECB) block encryption mode for tablespace key encryption and Cipher Block Chaining (CBC) block encryption mode for data encryption.

  • Altering the ENCRYPTION attribute of a table is an ALGORITHM=COPY operation. ALGORITHM=INPLACE is not supported.

  • InnoDB tablespace encryption only supports InnoDB tables that are stored in file-per-table tablespaces. Encryption is not supported for tables stored in other InnoDB tablespace types including general tablespaces, the system tablespace, undo log tablespaces, and the temporary tablespace.

  • You cannot move or copy an encrypted table from a file-per-table tablespace to an unsupported InnoDB tablespace type.

  • Tablespace encryption only applies to data in the tablespace. Data is not encrypted in the redo log, undo log, or binary log.

  • Direct migration from the keyring_file plugin to the keyring_okv plugin, or vice-versa, is currently unsupported. Changing keyring plugins requires decrypting tables, uninstalling the current keyring plugin, installing and configuring the other keyring plugin, and re-encrypting tables.

15.8 InnoDB Tables and Indexes

15.8.1 Creating InnoDB Tables

To create an InnoDB table, use the CREATE TABLE statement. You do not need to specify the ENGINE=InnoDB clause if InnoDB is defined as the default storage engine, which it is by default as of MySQL 5.5. You might still use the ENGINE=InnoDB clause if you plan to use mysqldump or replication to replay the CREATE TABLE statement on a server where the default storage engine is not InnoDB.

-- Default storage engine = InnoDB.
CREATE TABLE t1 (a INT, b CHAR (20), PRIMARY KEY (a));
-- Backward-compatible with older MySQL.
CREATE TABLE t2 (a INT, b CHAR (20), PRIMARY KEY (a)) ENGINE=InnoDB;

An InnoDB table and its indexes can be created in the system tablespace, in a file-per-table tablespace, or in a general tablespace. When innodb_file_per_table is enabled, which is the default setting as of MySQL 5.6.6, an InnoDB table is implicitly created in an individual file-per-table tablespace. Conversely, when innodb_file_per_table is disabled, an InnoDB table is implicitly created in the system tablespace. With general tablespaces, you can use CREATE TABLE ... TABLESPACE syntax to explicitly create an InnoDB table in any of the three tablespace types.

When you create an InnoDB table, MySQL creates a .frm file in a database directory under the MySQL data directory. For a table created in a file-per-table tablespace, an .ibd file is also created. A table created in the system tablespace is created in the existing system tablespace ibdata files. A table created in a general tablespace is created in an existing general tablespace .ibd file.

Internally, InnoDB adds an entry for each table to the InnoDB data dictionary. The entry includes the database name. For example, if table t1 is created in the test database, the data dictionary entry is 'test/t1'. This means you can create a table of the same name (t1) in a different database, and the table names do not collide inside InnoDB.
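For example, the following sketch creates a table named t1 in two hypothetical databases, db1 and db2 (both of which must already exist). The resulting data dictionary entries, 'db1/t1' and 'db2/t1', do not collide:

CREATE TABLE db1.t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE db2.t1 (a INT PRIMARY KEY) ENGINE=InnoDB;

-- List both entries by their database-qualified names
SELECT NAME FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE NAME LIKE '%/t1';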

Viewing the Properties of InnoDB Tables

To view the properties of InnoDB tables, issue a SHOW TABLE STATUS statement:

mysql> SHOW TABLE STATUS FROM test LIKE 't%' \G
*************************** 1. row ***************************
           Name: t1
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 0
 Avg_row_length: 0
    Data_length: 16384
Max_data_length: 0
   Index_length: 0
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2015-03-16 15:13:31
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
1 row in set (0.00 sec)

In the status output, you see that the Row_format property of table t1 is Compact. The Dynamic or Compressed row format is required to take advantage of InnoDB features such as table compression and off-page storage for long column values. To use these row formats, you can enable innodb_file_per_table (the default as of MySQL 5.6.6) and set innodb_file_format to Barracuda, which implicitly creates InnoDB tables in file-per-table tablespaces:

SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_file_format=barracuda;
CREATE TABLE t3 (a INT, b CHAR (20), PRIMARY KEY (a)) ROW_FORMAT=DYNAMIC;
CREATE TABLE t4 (a INT, b CHAR (20), PRIMARY KEY (a)) ROW_FORMAT=COMPRESSED;

Or, you can use CREATE TABLE ... TABLESPACE syntax to create an InnoDB table in a general tablespace. General tablespaces support all row formats. For more information, see Section 15.7.9, “InnoDB General Tablespaces”.

CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1 ROW_FORMAT=DYNAMIC;

CREATE TABLE ... TABLESPACE syntax can also be used to create InnoDB tables with a Dynamic row format in the system tablespace, alongside tables with a Compact or Redundant row format.

CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE = innodb_system ROW_FORMAT=DYNAMIC;

InnoDB table properties may also be queried using the InnoDB Information Schema system tables:

SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE NAME='test/t1' \G
*************************** 1. row ***************************
     TABLE_ID: 45
         NAME: test/t1
         FLAG: 1
       N_COLS: 5
        SPACE: 35
  FILE_FORMAT: Antelope
   ROW_FORMAT: Compact
ZIP_PAGE_SIZE: 0
   SPACE_TYPE: Single
1 row in set (0.00 sec)

Defining a Primary Key for InnoDB Tables

Always set up a primary key for each InnoDB table, specifying the column or columns that:

  • Are referenced by the most important queries.

  • Are never left blank.

  • Never have duplicate values.

  • Rarely if ever change value once inserted.

For example, in a table containing information about people, you would not create a primary key on (firstname, lastname) because more than one person can have the same name, some people have blank last names, and sometimes people change their names. With so many constraints, often there is not an obvious set of columns to use as a primary key, so you create a new column with a numeric ID to serve as all or part of the primary key. You can declare an auto-increment column so that ascending values are filled in automatically as rows are inserted:

-- The value of ID can act like a pointer between related items in different tables.
CREATE TABLE t5 (id INT AUTO_INCREMENT, b CHAR (20), PRIMARY KEY (id));
-- The primary key can consist of more than one column. Any autoinc column must come first.
CREATE TABLE t6 (id INT AUTO_INCREMENT, a INT, b CHAR (20), PRIMARY KEY (id,a));

Although the table works correctly without defining a primary key, the primary key is involved with many aspects of performance and is a crucial design aspect for any large or frequently used table. It is recommended that you always specify a primary key in the CREATE TABLE statement. If you create the table, load data, and then run ALTER TABLE to add a primary key later, that operation is much slower than defining the primary key when creating the table.

15.8.2 Role of the .frm File for InnoDB Tables

MySQL stores its data dictionary information for tables in .frm files in database directories. Unlike other MySQL storage engines, InnoDB also encodes information about the table in its own internal data dictionary inside the tablespace. When MySQL drops a table or a database, it deletes one or more .frm files as well as the corresponding entries inside the InnoDB data dictionary. You cannot move InnoDB tables between databases simply by moving the .frm files.

15.8.3 Physical Row Structure of InnoDB Tables

The physical row structure of an InnoDB table depends on the row format specified when the table is created. If a row format is not specified, the default row format is used. The innodb_file_format default is Barracuda and the default row format is defined by the innodb_default_row_format configuration option, which has a default value of DYNAMIC.

The REDUNDANT format is available to retain compatibility with older versions of MySQL.

To check the row format of an InnoDB table, you can use SHOW TABLE STATUS. For example:

mysql> SHOW TABLE STATUS IN test1\G
*************************** 1. row ***************************
           Name: t1
         Engine: InnoDB
        Version: 10
     Row_format: Dynamic
           Rows: 0
 Avg_row_length: 0
    Data_length: 16384
Max_data_length: 0
   Index_length: 16384
      Data_free: 0
 Auto_increment: 1
    Create_time: 2016-09-14 16:29:38
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: 
        Comment: 

You can also check the row format of an InnoDB table by querying INFORMATION_SCHEMA.INNODB_SYS_TABLES.

mysql> SELECT NAME, ROW_FORMAT FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE NAME='test1/t1';
+----------+------------+
| NAME     | ROW_FORMAT |
+----------+------------+
| test1/t1 | Dynamic    |
+----------+------------+

Redundant Row Format Characteristics

Rows in InnoDB tables that use REDUNDANT row format have the following characteristics:

  • Each index record contains a 6-byte header. The header is used to link together consecutive records, and also in row-level locking.

  • Records in the clustered index contain fields for all user-defined columns. In addition, there is a 6-byte transaction ID field and a 7-byte roll pointer field.

  • If no primary key was defined for a table, each clustered index record also contains a 6-byte row ID field.

  • Each secondary index record also contains all the primary key fields defined for the clustered index key that are not in the secondary index.

  • A record contains a pointer to each field of the record. If the total length of the fields in a record is less than 128 bytes, the pointer is one byte; otherwise, two bytes. The array of these pointers is called the record directory. The area where these pointers point is called the data part of the record.

  • Internally, InnoDB stores fixed-length character columns such as CHAR(10) in a fixed-length format. InnoDB does not truncate trailing spaces from VARCHAR columns.

  • InnoDB encodes fixed-length fields greater than or equal to 768 bytes in length as variable-length fields, which can be stored off-page. For example, a CHAR(255) column can exceed 768 bytes if the maximum byte length of the character set is greater than 3, as it is with utf8mb4.

  • An SQL NULL value reserves one or two bytes in the record directory. Besides that, an SQL NULL value reserves zero bytes in the data part of the record if stored in a variable length column. In a fixed-length column, it reserves the fixed length of the column in the data part of the record. Reserving the fixed space for NULL values enables an update of the column from NULL to a non-NULL value to be done in place without causing fragmentation of the index page.

COMPACT Row Format Characteristics

The COMPACT row format decreases row storage space by about 20% compared to the REDUNDANT format at the cost of increasing CPU use for some operations. If your workload is a typical one that is limited by cache hit rates and disk speed, COMPACT format is likely to be faster. If the workload is a rare case that is limited by CPU speed, compact format might be slower.

Rows in InnoDB tables that use COMPACT row format have the following characteristics:

  • Each index record contains a 5-byte header that may be preceded by a variable-length header. The header is used to link together consecutive records, and also in row-level locking.

  • The variable-length part of the record header contains a bit vector for indicating NULL columns. If the number of columns in the index that can be NULL is N, the bit vector occupies CEILING(N/8) bytes. (For example, if there are anywhere from 9 to 15 columns that can be NULL, the bit vector uses two bytes.) Columns that are NULL do not occupy space other than the bit in this vector. The variable-length part of the header also contains the lengths of variable-length columns. Each length takes one or two bytes, depending on the maximum length of the column. If all columns in the index are NOT NULL and have a fixed length, the record header has no variable-length part.

  • For each non-NULL variable-length field, the record header contains the length of the column in one or two bytes. Two bytes are only needed if part of the column is stored externally in overflow pages or the maximum length exceeds 255 bytes and the actual length exceeds 127 bytes. For an externally stored column, the 2-byte length indicates the length of the internally stored part plus the 20-byte pointer to the externally stored part. The internal part is 768 bytes, so the length is 768+20. The 20-byte pointer stores the true length of the column.

  • The record header is followed by the data contents of the non-NULL columns.

  • Records in the clustered index contain fields for all user-defined columns. In addition, there is a 6-byte transaction ID field and a 7-byte roll pointer field.

  • If no primary key was defined for a table, each clustered index record also contains a 6-byte row ID field.

  • Each secondary index record also contains all the primary key fields defined for the clustered index key that are not in the secondary index. If any of these primary key fields are variable length, the record header for each secondary index has a variable-length part to record their lengths, even if the secondary index is defined on fixed-length columns.

  • Internally, for nonvariable-length character sets, InnoDB stores fixed-length character columns such as CHAR(10) in a fixed-length format.

    InnoDB does not truncate trailing spaces from VARCHAR columns.

  • Internally, for variable-length character sets such as utf8mb3 and utf8mb4, InnoDB attempts to store CHAR(N) in N bytes by trimming trailing spaces. If the byte length of a CHAR(N) column value exceeds N bytes, InnoDB trims trailing spaces to a minimum of the column value byte length. The maximum length of a CHAR(N) column is the maximum character byte length × N.

    InnoDB reserves a minimum of N bytes for CHAR(N). Reserving the minimum space N in many cases enables column updates to be done in place without causing fragmentation of the index page. By comparison, for ROW_FORMAT=REDUNDANT, CHAR(N) columns occupy the maximum character byte length × N.

    InnoDB encodes fixed-length fields greater than or equal to 768 bytes in length as variable-length fields, which can be stored off-page. For example, a CHAR(255) column can exceed 768 bytes if the maximum byte length of the character set is greater than 3, as it is with utf8mb4.

    ROW_FORMAT=DYNAMIC and ROW_FORMAT=COMPRESSED handle CHAR storage in the same way as ROW_FORMAT=COMPACT.

DYNAMIC and COMPRESSED Row Formats

DYNAMIC and COMPRESSED row formats are variations of the COMPACT row format. For information about these row formats, see Section 15.11.3, “DYNAMIC and COMPRESSED Row Formats”.

15.8.4 Moving or Copying InnoDB Tables to Another Machine

This section describes techniques for moving or copying some or all InnoDB tables to a different server. For example, you might move an entire MySQL instance to a larger, faster server; you might clone an entire MySQL instance to a new replication slave server; you might copy individual tables to another server to develop and test an application, or to a data warehouse server to produce reports.

On Windows, InnoDB always stores database and table names internally in lowercase. To move databases in a binary format from Unix to Windows or from Windows to Unix, create all databases and tables using lowercase names. A convenient way to accomplish this is to add the following line to the [mysqld] section of your my.cnf or my.ini file before creating any databases or tables:

[mysqld]
lower_case_table_names=1

Techniques for moving or copying InnoDB tables include:

Transportable Tablespaces

Introduced in MySQL 5.6.6, the transportable tablespaces feature uses FLUSH TABLES ... FOR EXPORT to ready InnoDB tables for copying from one server instance to another. To use this feature, InnoDB tables must be created with innodb_file_per_table set to ON so that each InnoDB table has its own tablespace. For usage information, see Section 15.7.6, “Copying File-Per-Table Tablespaces to Another Server”.
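The following condensed sketch outlines the procedure for a hypothetical table t1. For the full procedure, prerequisites, and caveats, see Section 15.7.6, “Copying File-Per-Table Tablespaces to Another Server”:

-- On the destination instance: create an identical table, then discard its tablespace
CREATE TABLE t1 (c1 INT PRIMARY KEY) ENGINE=InnoDB;
ALTER TABLE t1 DISCARD TABLESPACE;

-- On the source instance: quiesce the table and create the .cfg metadata file
FLUSH TABLES t1 FOR EXPORT;
-- Copy t1.ibd and t1.cfg from the source database directory to the
-- destination database directory using operating system commands, then release the lock:
UNLOCK TABLES;

-- On the destination instance: import the copied tablespace
ALTER TABLE t1 IMPORT TABLESPACE;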

MySQL Enterprise Backup

The MySQL Enterprise Backup product lets you back up a running MySQL database, including InnoDB and MyISAM tables, with minimal disruption to operations while producing a consistent snapshot of the database. When MySQL Enterprise Backup is copying InnoDB tables, reads and writes to both InnoDB and MyISAM tables can continue. During the copying of MyISAM and other non-InnoDB tables, reads (but not writes) to those tables are permitted. In addition, MySQL Enterprise Backup can create compressed backup files, and back up subsets of InnoDB tables. In conjunction with the MySQL binary log, you can perform point-in-time recovery. MySQL Enterprise Backup is included as part of the MySQL Enterprise subscription.

For more details about MySQL Enterprise Backup, see Section 29.2, “MySQL Enterprise Backup Overview”.

Copying Data Files (Cold Backup Method)

You can move an InnoDB database simply by copying all the relevant files listed under "Cold Backups" in Section 15.18.1, “InnoDB Backup”.

Like MyISAM data files, InnoDB data and log files are binary-compatible on all platforms having the same floating-point number format. If the floating-point formats differ but you have not used FLOAT or DOUBLE data types in your tables, then the procedure is the same: simply copy the relevant files.

When you move or copy file-per-table .ibd files, the database directory name must be the same on the source and destination systems. The table definition stored in the InnoDB shared tablespace includes the database name. The transaction IDs and log sequence numbers stored in the tablespace files also differ between databases.

To move an .ibd file and the associated table from one database to another, use a RENAME TABLE statement:

RENAME TABLE db1.tbl_name TO db2.tbl_name;

If you have a clean backup of an .ibd file, you can restore it to the MySQL installation from which it originated as follows:

  1. The table must not have been dropped or truncated since you copied the .ibd file, because doing so changes the table ID stored inside the tablespace.

  2. Issue this ALTER TABLE statement to delete the current .ibd file:

    ALTER TABLE tbl_name DISCARD TABLESPACE;
    
  3. Copy the backup .ibd file to the proper database directory.

  4. Issue this ALTER TABLE statement to tell InnoDB to use the new .ibd file for the table:

    ALTER TABLE tbl_name IMPORT TABLESPACE;
    
    Note

    The ALTER TABLE ... IMPORT TABLESPACE feature does not enforce foreign key constraints on imported data.

In this context, a clean .ibd file backup is one for which the following requirements are satisfied:

  • There are no uncommitted modifications by transactions in the .ibd file.

  • There are no unmerged insert buffer entries in the .ibd file.

  • Purge has removed all delete-marked index records from the .ibd file.

  • mysqld has flushed all modified pages of the .ibd file from the buffer pool to the file.

You can make a clean backup .ibd file using the following method:

  1. Stop all activity from the mysqld server and commit all transactions.

  2. Wait until SHOW ENGINE INNODB STATUS shows that there are no active transactions in the database, and the main thread status of InnoDB is Waiting for server activity. Then you can make a copy of the .ibd file.

Another method for making a clean copy of an .ibd file is to use the MySQL Enterprise Backup product:

  1. Use MySQL Enterprise Backup to back up the InnoDB installation.

  2. Start a second mysqld server on the backup and let it clean up the .ibd files in the backup.

Export and Import (mysqldump)

You can use mysqldump to dump your tables on one machine and then import the dump files on the other machine. Using this method, it does not matter whether the floating-point formats differ or whether your tables contain floating-point data.

One way to increase the performance of this method is to switch off autocommit mode when importing data, assuming that the tablespace has enough space for the big rollback segment that the import transactions generate. Do the commit only after importing a whole table or a segment of a table.
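For example, assuming a dump file named dump.sql produced by mysqldump on the source machine (the file name is illustrative), you might import it from the mysql client on the destination machine with autocommit turned off and commit once at the end:

SET autocommit=0;
source dump.sql
COMMIT;

mysqldump also provides a --no-autocommit option, which wraps each table's INSERT statements in its own transaction within the dump file itself.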

15.8.5 Converting Tables from MyISAM to InnoDB

If you have MyISAM tables that you want to convert to InnoDB for better reliability and scalability, review the following guidelines and tips before making the conversion.

Reduce Memory Usage for MyISAM, Increase Memory Usage for InnoDB

As you transition away from MyISAM tables, lower the value of the key_buffer_size configuration option to free memory no longer needed for caching MyISAM index data. Increase the value of the innodb_buffer_pool_size configuration option, which performs a similar role of allocating cache memory for InnoDB tables. The InnoDB buffer pool caches both table data and index data, so it does double duty in speeding up lookups for queries and keeping query results in memory for reuse. For guidance regarding buffer pool size configuration, see Section 9.12.4.1, “How MySQL Uses Memory”.
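A minimal option-file sketch of this shift might look like the following; the values are illustrative only and should be sized to your server's available memory:

[mysqld]
# Shrink the MyISAM key cache as MyISAM tables are phased out
key_buffer_size=16M
# Give the reclaimed memory (and more, if available) to the InnoDB buffer pool
innodb_buffer_pool_size=4G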

On a busy server, run benchmarks with the Query Cache turned off. The InnoDB buffer pool provides similar benefits, so the Query Cache might be tying up memory unnecessarily.

Watch Out for Too-Long Or Too-Short Transactions

Because MyISAM tables do not support transactions, you might not have paid much attention to the autocommit configuration option and the COMMIT and ROLLBACK statements. These keywords are important to allow multiple sessions to read and write InnoDB tables concurrently, providing substantial scalability benefits in write-heavy workloads.

While a transaction is open, the system keeps a snapshot of the data as seen at the beginning of the transaction, which can cause substantial overhead if the system inserts, updates, and deletes millions of rows while a stray transaction keeps running. Thus, take care to avoid transactions that run for too long:

  • If you are using a mysql session for interactive experiments, always COMMIT (to finalize the changes) or ROLLBACK (to undo the changes) when finished. Close down interactive sessions rather than leaving them open for long periods, to avoid keeping transactions open for long periods by accident.

  • Make sure that any error handlers in your application also ROLLBACK incomplete changes or COMMIT completed changes.

  • ROLLBACK is a relatively expensive operation, because INSERT, UPDATE, and DELETE operations are written to InnoDB tables prior to the COMMIT, with the expectation that most changes are committed successfully and rollbacks are rare. When experimenting with large volumes of data, avoid making changes to large numbers of rows and then rolling back those changes.

  • When loading large volumes of data with a sequence of INSERT statements, periodically COMMIT the results to avoid having transactions that last for hours. In typical load operations for data warehousing, if something goes wrong, you truncate the table (using TRUNCATE TABLE) and start over from the beginning rather than doing a ROLLBACK.

The preceding tips save memory and disk space that can be wasted during too-long transactions. When transactions are shorter than they should be, the problem is excessive I/O. With each COMMIT, MySQL makes sure each change is safely recorded to disk, which involves some I/O.

  • For most operations on InnoDB tables, you should use the setting autocommit=0. From an efficiency perspective, this avoids unnecessary I/O when you issue large numbers of consecutive INSERT, UPDATE, or DELETE statements. From a safety perspective, this allows you to issue a ROLLBACK statement to recover lost or garbled data if you make a mistake on the mysql command line, or in an exception handler in your application.

  • The time when autocommit=1 is suitable for InnoDB tables is when running a sequence of queries for generating reports or analyzing statistics. In this situation, there is no I/O penalty related to COMMIT or ROLLBACK, and InnoDB can automatically optimize the read-only workload.

  • If you make a series of related changes, finalize all those changes at once with a single COMMIT at the end. For example, if you insert related pieces of information into several tables, do a single COMMIT after making all the changes. Or if you run many consecutive INSERT statements, do a single COMMIT after all the data is loaded; if you are doing millions of INSERT statements, perhaps split up the huge transaction by issuing a COMMIT every ten thousand or hundred thousand records, so the transaction does not grow too large. (A sketch of this batched-COMMIT pattern follows this list.)

  • Remember that even a SELECT statement opens a transaction, so after running some report or debugging queries in an interactive mysql session, either issue a COMMIT or close the mysql session.
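The following sketch illustrates the batched-COMMIT pattern described in this list when loading data with consecutive INSERT statements. The table big_table, its columns, and the batch boundaries are illustrative:

SET autocommit=0;
-- First batch of rows
INSERT INTO big_table (id, payload) VALUES (1,'a'),(2,'b'),(3,'c');
INSERT INTO big_table (id, payload) VALUES (4,'d'),(5,'e'),(6,'f');
COMMIT;
-- Next batch of rows
INSERT INTO big_table (id, payload) VALUES (7,'g'),(8,'h'),(9,'i');
COMMIT;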

Don't Worry Too Much About Deadlocks

You might see warning messages referring to deadlocks in the MySQL error log, or the output of SHOW ENGINE INNODB STATUS. Despite the scary-sounding name, a deadlock is not a serious issue for InnoDB tables, and often does not require any corrective action. When two transactions start modifying multiple tables, accessing the tables in a different order, they can reach a state where each transaction is waiting for the other and neither can proceed. When deadlock detection is enabled (the default), MySQL immediately detects this condition and cancels (rolls back) the smaller transaction, allowing the other to proceed. If deadlock detection is disabled using the innodb_deadlock_detect configuration option, InnoDB relies on the innodb_lock_wait_timeout setting to roll back transactions in case of a deadlock.

Either way, your applications need error-handling logic to restart a transaction that is forcibly cancelled due to a deadlock. When you re-issue the same SQL statements as before, the original timing issue no longer applies: either the other transaction has already finished and yours can proceed, or the other transaction is still in progress and your transaction waits until it finishes.

If deadlock warnings occur constantly, you might review the application code to reorder the SQL operations in a consistent way, or to shorten the transactions. You can test with the innodb_print_all_deadlocks option enabled to see all deadlock warnings in the MySQL error log, rather than only the last warning in the SHOW ENGINE INNODB STATUS output.

For more information, see Section 15.5.5, “Deadlocks in InnoDB”.

Plan the Storage Layout

To get the best performance from InnoDB tables, you can adjust a number of parameters related to storage layout.

When you convert MyISAM tables that are large, frequently accessed, and hold vital data, investigate and consider the innodb_file_per_table, innodb_file_format, and innodb_page_size configuration options, and the ROW_FORMAT and KEY_BLOCK_SIZE clauses of the CREATE TABLE statement.

During your initial experiments, the most important setting is innodb_file_per_table. When this setting is enabled, which is the default as of MySQL 5.6.6, new InnoDB tables are implicitly created in file-per-table tablespaces. In contrast with the InnoDB system tablespace, file-per-table tablespaces allow disk space to be reclaimed by the operating system when a table is truncated or dropped. File-per-table tablespaces also support the Barracuda file format and associated features such as table compression and off-page storage for long variable-length columns. For more information, see Section 15.7.4, “InnoDB File-Per-Table Tablespaces”.

You can also store InnoDB tables in a shared general tablespace. General tablespaces support the Barracuda file format and can contain multiple tables. For more information, see Section 15.7.9, “InnoDB General Tablespaces”.

Converting an Existing Table

To convert a non-InnoDB table to use InnoDB use ALTER TABLE:

ALTER TABLE table_name ENGINE=InnoDB;
Important

Do not convert MySQL system tables in the mysql database from MyISAM to the InnoDB type. This is an unsupported operation.

Cloning the Structure of a Table

You might make an InnoDB table that is a clone of a MyISAM table, rather than doing the ALTER TABLE conversion, to test the old and new table side-by-side before switching.

Create an empty InnoDB table with identical column and index definitions. Use SHOW CREATE TABLE table_name\G to see the full CREATE TABLE statement to use. Change the ENGINE clause to ENGINE=INNODB.

Transferring Existing Data

To transfer a large volume of data into an empty InnoDB table created as shown in the previous section, insert the rows with INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns.

You can also create the indexes for the InnoDB table after inserting the data. Historically, creating new secondary indexes was a slow operation for InnoDB, but now you can create the indexes after the data is loaded with relatively little overhead from the index creation step.

If you have UNIQUE constraints on secondary keys, you can speed up a table import by turning off the uniqueness checks temporarily during the import operation:

SET unique_checks=0;
... import operation ...
SET unique_checks=1;

For big tables, this saves disk I/O because InnoDB can use its change buffer to write secondary index records as a batch. Be certain that the data contains no duplicate keys. unique_checks permits but does not require storage engines to ignore duplicate keys.

To get better control over the insertion process, you might insert big tables in pieces:

INSERT INTO newtable SELECT * FROM oldtable
   WHERE yourkey > something AND yourkey <= somethingelse;

After all records have been inserted, you can rename the tables.

During the conversion of big tables, increase the size of the InnoDB buffer pool to reduce disk I/O, to a maximum of 80% of physical memory. You can also increase the sizes of the InnoDB log files.
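As a sketch with illustrative values: in MySQL 5.7, the buffer pool size can be increased at runtime, while changing the redo log file size requires editing the option file and restarting the server (see Section 15.7.2, “Changing the Number or Size of InnoDB Redo Log Files”):

-- Increase the buffer pool dynamically; the value (8GB) is illustrative
SET GLOBAL innodb_buffer_pool_size = 8589934592;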

Storage Requirements

If you intend to make several temporary copies of your data in InnoDB tables during the conversion process, it is recommended that you create the tables in file-per-table tablespaces so that you can reclaim the disk space when you drop the tables. As mentioned previously, when the innodb_file_per_table option is enabled (the default), newly created InnoDB tables are implicitly created in file-per-table tablespaces.

Whether you convert the MyISAM table directly or create a cloned InnoDB table, make sure that you have sufficient disk space to hold both the old and new tables during the process. InnoDB tables require more disk space than MyISAM tables. If an ALTER TABLE operation runs out of space, it starts a rollback, and that can take hours if it is disk-bound. For inserts, InnoDB uses the insert buffer to merge secondary index records to indexes in batches. That saves a lot of disk I/O. For rollback, no such mechanism is used, and the rollback can take 30 times longer than the insertion.

In the case of a runaway rollback, if you do not have valuable data in your database, it may be advisable to kill the database process rather than wait for millions of disk I/O operations to complete. For the complete procedure, see Section 15.21.2, “Forcing InnoDB Recovery”.

Carefully Choose a PRIMARY KEY for Each Table

The PRIMARY KEY clause is a critical factor affecting the performance of MySQL queries and the space usage for tables and indexes. Perhaps you have phoned a financial institution where you are asked for an account number. If you do not have the number, you are asked for a dozen different pieces of information to uniquely identify yourself. The primary key is like that unique account number that lets you get straight down to business when querying or modifying the information in a table. Every row in the table must have a primary key value, and no two rows can have the same primary key value.

Here are guidelines for the primary key, followed by more detailed explanations.

  • Declare a PRIMARY KEY for each table. Typically, it is the most important column that you refer to in WHERE clauses when looking up a single row.

  • Declare the PRIMARY KEY clause in the original CREATE TABLE statement, rather than adding it later through an ALTER TABLE statement.

  • Choose the column and its data type carefully. Prefer numeric columns over character or string ones.

  • Consider using an auto-increment column if there is not another stable, unique, non-null, numeric column to use.

  • An auto-increment column is also a good choice if there is any doubt whether the value of the primary key column could ever change. Changing the value of a primary key column is an expensive operation, possibly involving rearranging data within the table and within each secondary index.

Consider adding a primary key to any table that does not already have one. Use the smallest practical numeric type based on the maximum projected size of the table. This can make each row slightly more compact, which can yield substantial space savings for large tables. The space savings are multiplied if the table has any secondary indexes, because the primary key value is repeated in each secondary index entry. In addition to reducing data size on disk, a small primary key also lets more data fit into the buffer pool, speeding up all kinds of operations and improving concurrency.

If the table already has a primary key on some longer column, such as a VARCHAR, consider adding a new unsigned AUTO_INCREMENT column and switching the primary key to that, even if that column is not referenced in queries. This design change can produce substantial space savings in the secondary indexes. You can designate the former primary key columns as UNIQUE NOT NULL to enforce the same constraints as the PRIMARY KEY clause, that is, to prevent duplicate or null values across all those columns.
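
One possible form of this change, assuming a hypothetical customer table currently keyed on a VARCHAR account_code column, is a single ALTER TABLE statement:

ALTER TABLE customer
  DROP PRIMARY KEY,
  ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  ADD UNIQUE KEY uk_account_code (account_code);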

If you spread related information across multiple tables, typically each table uses the same column for its primary key. For example, a personnel database might have several tables, each with a primary key of employee number. A sales database might have some tables with a primary key of customer number, and other tables with a primary key of order number. Because lookups using the primary key are very fast, you can construct efficient join queries for such tables.

If you leave the PRIMARY KEY clause out entirely, InnoDB creates a hidden one for you, a 6-byte value that might be longer than you need, thus wasting space. Because it is hidden, you cannot refer to it in queries.

Application Performance Considerations

The extra reliability and scalability features of InnoDB do require more disk storage than equivalent MyISAM tables. You might change the column and index definitions slightly, for better space utilization, reduced I/O and memory consumption when processing result sets, and better query optimization plans making efficient use of index lookups.

If you do set up a numeric ID column for the primary key, use that value to cross-reference with related values in any other tables, particularly for join queries. For example, rather than accepting a country name as input and doing queries searching for the same name, do one lookup to determine the country ID, then do other queries (or a single join query) to look up relevant information across several tables. Rather than storing a customer or catalog item number as a string of digits, potentially using up several bytes, convert it to a numeric ID for storing and querying. A 4-byte unsigned INT column can index over 4 billion items (with the US meaning of billion: 1000 million). For the ranges of the different integer types, see Section 12.2.1, “Integer Types (Exact Value) - INTEGER, INT, SMALLINT, TINYINT, MEDIUMINT, BIGINT”.
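
For example, with hypothetical country and city tables that share a numeric country_id column, the lookup query and the join query might look like this:

SELECT country_id FROM country WHERE name = 'Finland';

SELECT ci.name
  FROM city ci
  JOIN country co ON co.country_id = ci.country_id
 WHERE co.name = 'Finland';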

Understand Files Associated with InnoDB Tables

InnoDB files require more care and planning than MyISAM files do:

  • You must not delete the ibdata files that represent the InnoDB system tablespace.

  • Copying InnoDB tables from one server to another requires issuing the FLUSH TABLES ... FOR EXPORT statement first, and copying the table_name.cfg file along with the table_name.ibd file.
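
    For example, a minimal sequence for one table might look like the following, where t1 is a placeholder name and the file copies are performed outside the SQL session:

    -- On the source server:
    FLUSH TABLES t1 FOR EXPORT;
    -- Copy t1.ibd and t1.cfg from the source data directory, then:
    UNLOCK TABLES;

    -- On the destination server, after creating an identical table definition:
    ALTER TABLE t1 DISCARD TABLESPACE;
    -- Copy the .ibd and .cfg files into place, then:
    ALTER TABLE t1 IMPORT TABLESPACE;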

15.8.6 AUTO_INCREMENT Handling in InnoDB

InnoDB provides a configurable locking mechanism that can significantly improve scalability and performance of SQL statements that add rows to tables with AUTO_INCREMENT columns. To use the AUTO_INCREMENT mechanism with an InnoDB table, an AUTO_INCREMENT column must be defined as part of an index such that it is possible to perform the equivalent of an indexed SELECT MAX(ai_col) lookup on the table to obtain the maximum column value. Typically, this is achieved by making the column the first column of some table index.

This section describes the behavior of AUTO_INCREMENT lock modes, usage implications for different AUTO_INCREMENT lock mode settings, and how InnoDB initializes the AUTO_INCREMENT counter.

InnoDB AUTO_INCREMENT Lock Modes

This section describes the behavior of AUTO_INCREMENT lock modes used to generate auto-increment values, and how each lock mode affects replication. Auto-increment lock modes are configured at startup using the innodb_autoinc_lock_mode configuration parameter.

The following terms are used in describing innodb_autoinc_lock_mode settings:

  • INSERT-like statements

    All statements that generate new rows in a table, including INSERT, INSERT ... SELECT, REPLACE, REPLACE ... SELECT, and LOAD DATA. Includes simple-inserts, bulk-inserts, and mixed-mode inserts.

  • Simple inserts

    Statements for which the number of rows to be inserted can be determined in advance (when the statement is initially processed). This includes single-row and multiple-row INSERT and REPLACE statements that do not have a nested subquery, but not INSERT ... ON DUPLICATE KEY UPDATE.

  • Bulk inserts

    Statements for which the number of rows to be inserted (and the number of required auto-increment values) is not known in advance. This includes INSERT ... SELECT, REPLACE ... SELECT, and LOAD DATA statements, but not plain INSERT. InnoDB assigns new values for the AUTO_INCREMENT column one at a time as each row is processed.

  • Mixed-mode inserts

    These are simple insert statements that specify the auto-increment value for some (but not all) of the new rows. An example follows, where c1 is an AUTO_INCREMENT column of table t1:

    INSERT INTO t1 (c1,c2) VALUES (1,'a'), (NULL,'b'), (5,'c'), (NULL,'d');
    

    Another type of mixed-mode insert is INSERT ... ON DUPLICATE KEY UPDATE, which in the worst case is in effect an INSERT followed by an UPDATE, where the allocated value for the AUTO_INCREMENT column may or may not be used during the update phase.

There are three possible settings for the innodb_autoinc_lock_mode configuration parameter. The settings are 0, 1, or 2, for traditional, consecutive, or interleaved lock mode, respectively.

  • innodb_autoinc_lock_mode = 0 (traditional lock mode)

    The traditional lock mode provides the same behavior that existed before the innodb_autoinc_lock_mode configuration parameter was introduced in MySQL 5.1. The traditional lock mode option is provided for backward compatibility, performance testing, and working around issues with “mixed-mode inserts”, due to possible differences in semantics.

    In this lock mode, all INSERT-like statements obtain a special table-level AUTO-INC lock for inserts into tables with AUTO_INCREMENT columns. This lock is normally held to the end of the statement (not to the end of the transaction) to ensure that auto-increment values are assigned in a predictable and repeatable order for a given sequence of INSERT statements, and to ensure that auto-increment values assigned by any given statement are consecutive.

    In the case of statement-based replication, this means that when an SQL statement is replicated on a slave server, the same values are used for the auto-increment column as on the master server. The result of execution of multiple INSERT statements is deterministic, and the slave reproduces the same data as on the master. If auto-increment values generated by multiple INSERT statements were interleaved, the result of two concurrent INSERT statements would be nondeterministic, and could not reliably be propagated to a slave server using statement-based replication.

    To make this clear, consider an example that uses this table:

    CREATE TABLE t1 (
      c1 INT(11) NOT NULL AUTO_INCREMENT,
      c2 VARCHAR(10) DEFAULT NULL,
      PRIMARY KEY (c1)
    ) ENGINE=InnoDB;
    

    Suppose that there are two transactions running, each inserting rows into a table with an AUTO_INCREMENT column. One transaction is using an INSERT ... SELECT statement that inserts 1000 rows, and another is using a simple INSERT statement that inserts one row:

    Tx1: INSERT INTO t1 (c2) SELECT 1000 rows from another table ...
    Tx2: INSERT INTO t1 (c2) VALUES ('xxx');
    

    InnoDB cannot tell in advance how many rows are retrieved from the SELECT in the INSERT statement in Tx1, and it assigns the auto-increment values one at a time as the statement proceeds. With a table-level lock, held to the end of the statement, only one INSERT statement referring to table t1 can execute at a time, and the generation of auto-increment numbers by different statements is not interleaved. The auto-increment values generated by the Tx1 INSERT ... SELECT statement are consecutive, and the (single) auto-increment value used by the INSERT statement in Tx2 is either smaller or larger than all those used for Tx1, depending on which statement executes first.

    As long as the SQL statements execute in the same order when replayed from the binary log (when using statement-based replication, or in recovery scenarios), the results are the same as they were when Tx1 and Tx2 first ran. Thus, table-level locks held until the end of a statement make INSERT statements using auto-increment safe for use with statement-based replication. However, those table-level locks limit concurrency and scalability when multiple transactions are executing insert statements at the same time.

    In the preceding example, if there were no table-level lock, the value of the auto-increment column used for the INSERT in Tx2 depends on precisely when the statement executes. If the INSERT of Tx2 executes while the INSERT of Tx1 is running (rather than before it starts or after it completes), the specific auto-increment values assigned by the two INSERT statements are nondeterministic, and may vary from run to run.

    Under the consecutive lock mode, InnoDB can avoid using table-level AUTO-INC locks for simple insert statements where the number of rows is known in advance, and still preserve deterministic execution and safety for statement-based replication.

    If you are not using the binary log to replay SQL statements as part of recovery or replication, the interleaved lock mode can be used to eliminate all use of table-level AUTO-INC locks for even greater concurrency and performance, at the cost of permitting gaps in auto-increment numbers assigned by a statement and potentially having the numbers assigned by concurrently executing statements interleaved.

  • innodb_autoinc_lock_mode = 1 (consecutive lock mode)

    This is the default lock mode. In this mode, bulk inserts use the special AUTO-INC table-level lock and hold it until the end of the statement. This applies to all INSERT ... SELECT, REPLACE ... SELECT, and LOAD DATA statements. Only one statement holding the AUTO-INC lock can execute at a time.

    Simple inserts (for which the number of rows to be inserted is known in advance) avoid table-level AUTO-INC locks by obtaining the required number of auto-increment values under the control of a mutex (a light-weight lock) that is only held for the duration of the allocation process, not until the statement completes. No table-level AUTO-INC lock is used unless an AUTO-INC lock is held by another transaction. If another transaction holds an AUTO-INC lock, a simple insert waits for the AUTO-INC lock, as if it were a bulk insert.

    This lock mode ensures that, in the presence of INSERT statements where the number of rows is not known in advance (and where auto-increment numbers are assigned as the statement progresses), all auto-increment values assigned by any INSERT-like statement are consecutive, and operations are safe for statement-based replication.

    Simply put, this lock mode significantly improves scalability while being safe for use with statement-based replication. Further, as with traditional lock mode, auto-increment numbers assigned by any given statement are consecutive. There is no change in semantics compared to traditional mode for any statement that uses auto-increment, with one important exception.

    The exception is for mixed-mode inserts, where the user provides explicit values for an AUTO_INCREMENT column for some, but not all, rows in a multiple-row simple insert. For such inserts, InnoDB allocates more auto-increment values than the number of rows to be inserted. However, all the automatically assigned values are generated consecutively (and are thus higher than the auto-increment value generated by the most recently executed previous statement). Excess numbers are lost.

  • innodb_autoinc_lock_mode = 2 (interleaved lock mode)

    In this lock mode, no INSERT-like statements use the table-level AUTO-INC lock, and multiple statements can execute at the same time. This is the fastest and most scalable lock mode, but it is not safe when using statement-based replication or recovery scenarios when SQL statements are replayed from the binary log.

    In this lock mode, auto-increment values are guaranteed to be unique and monotonically increasing across all concurrently executing INSERT-like statements. However, because multiple statements can be generating numbers at the same time (that is, allocation of numbers is interleaved across statements), the values generated for the rows inserted by any given statement may not be consecutive.

    If the only statements executing are simple inserts where the number of rows to be inserted is known ahead of time, there are no gaps in the numbers generated for a single statement, except for mixed-mode inserts. However, when bulk inserts are executed, there may be gaps in the auto-increment values assigned by any given statement.

InnoDB AUTO_INCREMENT Lock Mode Usage Implications

  • Using auto-increment with replication

    If you are using statement-based replication, set innodb_autoinc_lock_mode to 0 or 1 and use the same value on the master and its slaves. Auto-increment values are not ensured to be the same on the slaves as on the master if you use innodb_autoinc_lock_mode = 2 (interleaved) or configurations where the master and slaves do not use the same lock mode.

    If you are using row-based or mixed-format replication, all of the auto-increment lock modes are safe, since row-based replication is not sensitive to the order of execution of the SQL statements (and the mixed format uses row-based replication for any statements that are unsafe for statement-based replication).

  • Lost auto-increment values and sequence gaps

    In all lock modes (0, 1, and 2), if a transaction that generated auto-increment values rolls back, those auto-increment values are lost. Once a value is generated for an auto-increment column, it cannot be rolled back, whether or not the INSERT-like statement is completed, and whether or not the containing transaction is rolled back. Such lost values are not reused. Thus, there may be gaps in the values stored in an AUTO_INCREMENT column of a table.

  • Specifying NULL or 0 for the AUTO_INCREMENT column

    In all lock modes (0, 1, and 2), if a user specifies NULL or 0 for the AUTO_INCREMENT column in an INSERT, InnoDB treats the row as if the value was not specified and generates a new value for it.

  • Assigning a negative value to the AUTO_INCREMENT column

    In all lock modes (0, 1, and 2), the behavior of the auto-increment mechanism is not defined if you assign a negative value to the AUTO_INCREMENT column.

  • If the AUTO_INCREMENT value becomes larger than the maximum integer for the specified integer type

    In all lock modes (0, 1, and 2), the behavior of the auto-increment mechanism is not defined if the value becomes larger than the maximum integer that can be stored in the specified integer type.

  • Gaps in auto-increment values for bulk inserts

    With innodb_autoinc_lock_mode set to 0 (traditional) or 1 (consecutive), the auto-increment values generated by any given statement are consecutive, without gaps, because the table-level AUTO-INC lock is held until the end of the statement, and only one such statement can execute at a time.

    With innodb_autoinc_lock_mode set to 2 (interleaved), there may be gaps in the auto-increment values generated by bulk inserts, but only if there are concurrently executing INSERT-like statements.

    For lock modes 1 or 2, gaps may occur between successive statements because for bulk inserts the exact number of auto-increment values required by each statement may not be known and overestimation is possible.

  • Auto-increment values assigned by mixed-mode inserts

    Consider a mixed-mode insert, where a simple insert specifies the auto-increment value for some (but not all) resulting rows. Such a statement behaves differently in lock modes 0, 1, and 2. For example, assume c1 is an AUTO_INCREMENT column of table t1, and that the most recent automatically generated sequence number is 100.

    mysql> CREATE TABLE t1 (
        -> c1 INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, 
        -> c2 CHAR(1)
        -> ) ENGINE = INNODB;

    Now, consider the following mixed-mode insert statement:

    mysql> INSERT INTO t1 (c1,c2) VALUES (1,'a'), (NULL,'b'), (5,'c'), (NULL,'d');
    

    With innodb_autoinc_lock_mode set to 0 (traditional), the four new rows are:

    mysql> SELECT c1, c2 FROM t1 ORDER BY c2;
    +-----+------+
    | c1  | c2   |
    +-----+------+
    |   1 | a    |
    | 101 | b    |
    |   5 | c    |
    | 102 | d    |
    +-----+------+
    

    The next available auto-increment value is 103 because the auto-increment values are allocated one at a time, not all at once at the beginning of statement execution. This result is true whether or not there are concurrently executing INSERT-like statements (of any type).

    With innodb_autoinc_lock_mode set to 1 (consecutive), the four new rows are also:

    mysql> SELECT c1, c2 FROM t1 ORDER BY c2;
    +-----+------+
    | c1  | c2   |
    +-----+------+
    |   1 | a    |
    | 101 | b    |
    |   5 | c    |
    | 102 | d    |
    +-----+------+
    

    However, in this case, the next available auto-increment value is 105, not 103, because four auto-increment values are allocated at the time the statement is processed, but only two are used. This result is true whether or not there are concurrently executing INSERT-like statements (of any type).

    With innodb_autoinc_lock_mode set to mode 2 (interleaved), the four new rows are:

    mysql> SELECT c1, c2 FROM t1 ORDER BY c2;
    +-----+------+
    | c1  | c2   |
    +-----+------+
    |   1 | a    |
    |   x | b    |
    |   5 | c    |
    |   y | d    |
    +-----+------+
    

    The values of x and y are unique and larger than any previously generated rows. However, the specific values of x and y depend on the number of auto-increment values generated by concurrently executing statements.

    Finally, consider the following statement, issued when the most-recently generated sequence number was the value 4:

    mysql> INSERT INTO t1 (c1,c2) VALUES (1,'a'), (NULL,'b'), (5,'c'), (NULL,'d');
    

    With any innodb_autoinc_lock_mode setting, this statement generates a duplicate-key error 23000 (Can't write; duplicate key in table) because 5 is allocated for the row (NULL, 'b') and insertion of the row (5, 'c') fails.

  • Modifying AUTO_INCREMENT column values in the middle of a sequence of INSERT statements

    In all lock modes (0, 1, and 2), modifying an AUTO_INCREMENT column value in the middle of a sequence of INSERT statements could lead to Duplicate entry errors. For example, if you perform an UPDATE operation that changes an AUTO_INCREMENT column value to a value larger than the current maximum auto-increment value, subsequent INSERT operations that do not specify an unused auto-increment value could encounter Duplicate entry errors. This behavior is demonstrated in the following example.

    mysql> CREATE TABLE t1 (
        -> c1 INT NOT NULL AUTO_INCREMENT,
        -> PRIMARY KEY (c1)
        ->  ) ENGINE = InnoDB;
    
    mysql> INSERT INTO t1 VALUES(0), (0), (3);
    
    mysql> SELECT c1 FROM t1;
    +----+
    | c1 |
    +----+
    |  1 |
    |  2 |
    |  3 |
    +----+
    
    mysql> UPDATE t1 SET c1 = 4 WHERE c1 = 1;
    
    mysql> SELECT c1 FROM t1;
    +----+
    | c1 |
    +----+
    |  2 |
    |  3 |
    |  4 |
    +----+
    
    mysql> INSERT INTO t1 VALUES(0);
    ERROR 1062 (23000): Duplicate entry '4' for key 'PRIMARY'

InnoDB AUTO_INCREMENT Counter Initialization

This section describes how InnoDB initializes AUTO_INCREMENT counters.

If you specify an AUTO_INCREMENT column for an InnoDB table, the table handle in the InnoDB data dictionary contains a special counter called the auto-increment counter that is used in assigning new values for the column. This counter is stored only in main memory, not on disk.

To initialize an auto-increment counter after a server restart, InnoDB executes the equivalent of the following statement on the first insert into a table containing an AUTO_INCREMENT column.

SELECT MAX(ai_col) FROM table_name FOR UPDATE;

InnoDB increments the value retrieved by the statement and assigns it to the column and to the auto-increment counter for the table. By default, the value is incremented by 1. This default can be overridden by the auto_increment_increment configuration setting.

If the table is empty, InnoDB uses the value 1. This default can be overridden by the auto_increment_offset configuration setting.

If a SHOW TABLE STATUS statement examines the table before the auto-increment counter is initialized, InnoDB initializes but does not increment the value. The value is stored for use by later inserts. This initialization uses a normal exclusive-locking read on the table and the lock lasts to the end of the transaction. InnoDB follows the same procedure for initializing the auto-increment counter for a newly created table.

After the auto-increment counter has been initialized, if you do not explicitly specify a value for an AUTO_INCREMENT column, InnoDB increments the counter and assigns the new value to the column. If you insert a row that explicitly specifies the column value, and the value is greater than the current counter value, the counter is set to the specified column value.

InnoDB uses the in-memory auto-increment counter as long as the server runs. When the server is stopped and restarted, InnoDB reinitializes the counter for each table for the first INSERT to the table, as described earlier.

A server restart also cancels the effect of the AUTO_INCREMENT = N table option in CREATE TABLE and ALTER TABLE statements, which you can use with InnoDB tables to set the initial counter value or alter the current counter value.
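
For example, the counter can be set when the table is created or adjusted later; the values shown are illustrative only:

CREATE TABLE t (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB AUTO_INCREMENT=1000;

ALTER TABLE t AUTO_INCREMENT = 5000;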

15.8.7 InnoDB and FOREIGN KEY Constraints

This section describes differences in the InnoDB storage engine's handling of foreign keys as compared with that of the MySQL Server.

For foreign key usage information and examples, see Section 14.1.18.5, “Using FOREIGN KEY Constraints”.

Foreign Key Definitions

Foreign key definitions for InnoDB tables are subject to the following conditions:

  • InnoDB permits a foreign key to reference any index column or group of columns. However, in the referenced table, there must be an index where the referenced columns are listed as the first columns in the same order.

  • InnoDB does not currently support foreign keys for tables with user-defined partitioning. This means that no user-partitioned InnoDB table may contain foreign key references or columns referenced by foreign keys.

  • InnoDB allows a foreign key constraint to reference a non-unique key. This is an InnoDB extension to standard SQL.
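
    For example, the following sketch (all names are hypothetical) defines a foreign key that references a non-unique parent key; the referenced category column is the first column of an index on the parent table, as required:

    CREATE TABLE parent (
      id INT NOT NULL PRIMARY KEY,
      category INT NOT NULL,
      INDEX idx_category (category)
    ) ENGINE=InnoDB;

    CREATE TABLE child (
      id INT NOT NULL PRIMARY KEY,
      parent_category INT,
      FOREIGN KEY (parent_category) REFERENCES parent(category)
    ) ENGINE=InnoDB;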

Referential Actions

Referential actions for foreign keys of InnoDB tables are subject to the following conditions:

  • While SET DEFAULT is allowed by the MySQL Server, it is rejected as invalid by InnoDB. CREATE TABLE and ALTER TABLE statements using this clause are not allowed for InnoDB tables.

  • If there are several rows in the parent table that have the same referenced key value, InnoDB acts in foreign key checks as if the other parent rows with the same key value do not exist. For example, if you have defined a RESTRICT type constraint, and there is a child row with several parent rows, InnoDB does not permit the deletion of any of those parent rows.

  • InnoDB performs cascading operations through a depth-first algorithm, based on records in the indexes corresponding to the foreign key constraints.

  • If ON UPDATE CASCADE or ON UPDATE SET NULL recurses to update the same table it has previously updated during the cascade, it acts like RESTRICT. This means that you cannot use self-referential ON UPDATE CASCADE or ON UPDATE SET NULL operations. This is to prevent infinite loops resulting from cascaded updates. A self-referential ON DELETE SET NULL, on the other hand, is possible, as is a self-referential ON DELETE CASCADE. Cascading operations may not be nested more than 15 levels deep.

  • Like MySQL in general, in an SQL statement that inserts, deletes, or updates many rows, InnoDB checks UNIQUE and FOREIGN KEY constraints row-by-row. When performing foreign key checks, InnoDB sets shared row-level locks on child or parent records it has to look at. InnoDB checks foreign key constraints immediately; the check is not deferred to transaction commit. According to the SQL standard, the default behavior should be deferred checking. That is, constraints are only checked after the entire SQL statement has been processed. Until InnoDB implements deferred constraint checking, some things are impossible, such as deleting a record that refers to itself using a foreign key.

Foreign Key Restrictions for Generated Columns and Virtual Indexes

  • A foreign key constraint on a generated stored column cannot use ON UPDATE CASCADE, ON DELETE SET NULL, ON UPDATE SET NULL, ON DELETE SET DEFAULT, or ON UPDATE SET DEFAULT.

  • A foreign key constraint cannot reference a generated virtual column.

  • Prior to 5.7.16, a foreign key constraint cannot reference a secondary index defined on a generated virtual column.

  • In MySQL 5.7.13 and earlier, InnoDB does not permit defining a foreign key constraint with a cascading referential action on the base column of an indexed generated virtual column. This restriction is lifted in MySQL 5.7.14.

  • In MySQL 5.7.13 and earlier, InnoDB does not permit defining cascading referential actions on non-virtual foreign key columns that are explicitly included in a virtual index. This restriction is lifted in MySQL 5.7.14.

Foreign Key Usage and Error Information

You can obtain general information about foreign keys and their usage by querying the INFORMATION_SCHEMA.KEY_COLUMN_USAGE table; information more specific to InnoDB tables is available in the INNODB_SYS_FOREIGN and INNODB_SYS_FOREIGN_COLS tables, also in the INFORMATION_SCHEMA database.
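
For example, the foreign key relationships in a given schema can be listed with a query like the following; the schema name 'test' is a placeholder:

SELECT TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME,
       REFERENCED_TABLE_NAME, REFERENCED_COLUMN_NAME
  FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
 WHERE TABLE_SCHEMA = 'test'
   AND REFERENCED_TABLE_NAME IS NOT NULL;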

In addition to SHOW ERRORS, in the event of a foreign key error involving InnoDB tables (usually Error 150 in the MySQL Server), you can obtain a detailed explanation of the most recent InnoDB foreign key error by checking the output of SHOW ENGINE INNODB STATUS.

15.8.8 Limits on InnoDB Tables

Warning

Do not convert MySQL system tables in the mysql database from MyISAM to InnoDB tables. This is an unsupported operation. If you do this, MySQL does not restart until you restore the old system tables from a backup or regenerate them by reinitializing the data directory (see Section 2.10.1, “Initializing the Data Directory”).

Warning

Before using NFS with InnoDB, review potential issues outlined in Using NFS with MySQL.

Maximums and Minimums

  • A table can contain a maximum of 1017 columns (raised in MySQL 5.6.9 from the earlier limit of 1000). Virtual generated columns are included in this limit.

  • A table can contain a maximum of 64 secondary indexes.

  • If innodb_large_prefix is enabled (the default), the index key prefix limit is 3072 bytes for InnoDB tables that use DYNAMIC or COMPRESSED row format. If innodb_large_prefix is disabled, the index key prefix limit is 767 bytes for tables of any row format.

    innodb_large_prefix is deprecated and will be removed in a future release. innodb_large_prefix was introduced in MySQL 5.5 to disable large index key prefixes for compatibility with earlier versions of InnoDB that do not support large index key prefixes.

    The index key prefix length limit is 767 bytes for InnoDB tables that use the REDUNDANT or COMPACT row format. For example, you might hit this limit with a column prefix index of more than 255 characters on a TEXT or VARCHAR column, assuming a utf8mb3 character set and the maximum of 3 bytes for each character.

    Attempting to use an index key prefix length that exceeds the limit returns an error. To avoid such errors in replication configurations, avoid enabling innodb_large_prefix on the master if it cannot also be enabled on slaves.

    The limits that apply to index key prefixes also apply to full-column index keys.

  • If you reduce the InnoDB page size to 8KB or 4KB by specifying the innodb_page_size option when creating the MySQL instance, the maximum length of the index key is lowered proportionally, based on the limit of 3072 bytes for a 16KB page size. That is, the maximum index key length is 1536 bytes when the page size is 8KB, and 768 bytes when the page size is 4KB.

  • A maximum of 16 columns is permitted for multicolumn indexes. Exceeding the limit returns an error.

    ERROR 1070 (42000): Too many key parts specified; max 16 parts allowed
    
  • The maximum row length, except for variable-length columns (VARBINARY, VARCHAR, BLOB and TEXT), is slightly less than half of a page for 4KB, 8KB, 16KB, and 32KB page sizes. For example, the maximum row length for the default innodb_page_size of 16KB is about 8000 bytes. For an InnoDB page size of 64KB, the maximum row length is about 16000 bytes. LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and TEXT columns, must be less than 4GB.

    If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page, as described in Section 15.12.2, “File Space Management”.

  • Although InnoDB supports row sizes larger than 65,535 bytes internally, MySQL itself imposes a row-size limit of 65,535 for the combined size of all columns:

    mysql> CREATE TABLE t (a VARCHAR(8000), b VARCHAR(10000),
        -> c VARCHAR(10000), d VARCHAR(10000), e VARCHAR(10000),
        -> f VARCHAR(10000), g VARCHAR(10000)) ENGINE=InnoDB;
    ERROR 1118 (42000): Row size too large. The maximum row size for the
    used table type, not counting BLOBs, is 65535. You have to change some
    columns to TEXT or BLOBs
    

    See Section C.10.4, “Limits on Table Column Count and Row Size”.

  • On some older operating systems, files must be less than 2GB. This is not a limitation of InnoDB itself, but if you require a large tablespace, configure it using several smaller data files rather than one large data file.

  • The combined size of the InnoDB log files can be up to 512GB.

  • The minimum tablespace size is slightly larger than 10MB. The maximum tablespace size depends on the InnoDB page size.

    Table 15.6 InnoDB Maximum Tablespace Size

    InnoDB Page Size    Maximum Tablespace Size
    4KB                 16TB
    8KB                 32TB
    16KB                64TB
    32KB                128TB
    64KB                256TB

    The maximum tablespace size is also the maximum size for a table.

  • The default page size in InnoDB is 16KB. You can increase or decrease the page size by configuring the innodb_page_size option when creating the MySQL instance.

    ROW_FORMAT=COMPRESSED in the Barracuda file format assumes that the page size is at most 16KB and uses 14-bit pointers.

    32KB and 64KB page sizes are supported, but ROW_FORMAT=COMPRESSED is unsupported for page sizes greater than 16KB. For both 32KB and 64KB page sizes, the maximum record size is 16KB. For innodb_page_size=32k, extent size is 2MB. For innodb_page_size=64k, extent size is 4MB.

    A MySQL instance using a particular InnoDB page size cannot use data files or log files from an instance that uses a different page size.

Restrictions on InnoDB Tables

  • ANALYZE TABLE determines index cardinality (as displayed in the Cardinality column of SHOW INDEX output) by doing random dives to each of the index trees and updating index cardinality estimates accordingly. Because these are only estimates, repeated runs of ANALYZE TABLE could produce different numbers. This makes ANALYZE TABLE fast on InnoDB tables but not 100% accurate because it does not take all rows into account.

    You can make the statistics collected by ANALYZE TABLE more precise and more stable by turning on the innodb_stats_persistent configuration option, as explained in Section 15.6.12.1, “Configuring Persistent Optimizer Statistics Parameters”. When that setting is enabled, it is important to run ANALYZE TABLE after major changes to indexed column data, because the statistics are not recalculated periodically (such as after a server restart) as they traditionally have been.

    You can change the number of random dives by modifying the innodb_stats_persistent_sample_pages system variable (if the persistent statistics setting is turned on), or the innodb_stats_transient_sample_pages system variable (if the persistent statistics setting is turned off).

    MySQL uses index cardinality estimates only in join optimization. If some join is not optimized in the right way, you can try using ANALYZE TABLE. In the few cases that ANALYZE TABLE does not produce values good enough for your particular tables, you can use FORCE INDEX with your queries to force the use of a particular index, or set the max_seeks_for_key system variable to ensure that MySQL prefers index lookups over table scans. See Section 6.1.5, “Server System Variables”, and Section B.5.5, “Optimizer-Related Issues”.

  • If statements or transactions are running on a table and ANALYZE TABLE is run on the same table followed by a second ANALYZE TABLE operation, the second ANALYZE TABLE operation is blocked until the statements or transactions are completed. This behavior occurs because ANALYZE TABLE marks the currently loaded table definition as obsolete when ANALYZE TABLE is finished running. New statements or transactions (including a second ANALYZE TABLE statement) must load the new table definition into the table cache, which cannot occur until currently running statements or transactions are completed and the old table definition is purged. Loading multiple concurrent table definitions is not supported.

  • SHOW TABLE STATUS does not give accurate statistics on InnoDB tables, except for the physical size reserved by the table. The row count is only a rough estimate used in SQL optimization.

  • InnoDB does not keep an internal count of rows in a table because concurrent transactions might see different numbers of rows at the same time. Consequently, SELECT COUNT(*) statements only count rows visible to the current transaction.

    Prior to MySQL 5.7.18, InnoDB processes SELECT COUNT(*) statements by scanning the clustered index. As of MySQL 5.7.18, InnoDB processes SELECT COUNT(*) statements by traversing a smaller secondary index, if present.

    Processing SELECT COUNT(*) statements takes some time if index records are not entirely in the buffer pool. For a faster count, you can create a counter table and let your application update it according to the inserts and deletes it does. However, this method may not scale well in situations where thousands of concurrent transactions are initiating updates to the same counter table. If an approximate row count is sufficient, SHOW TABLE STATUS can be used.

    InnoDB handles SELECT COUNT(*) and SELECT COUNT(1) operations in the same way. There is no performance difference.

  • On Windows, InnoDB always stores database and table names internally in lowercase. To move databases in a binary format from Unix to Windows or from Windows to Unix, create all databases and tables using lowercase names.

  • An AUTO_INCREMENT column ai_col must be defined as part of an index such that it is possible to perform the equivalent of an indexed SELECT MAX(ai_col) lookup on the table to obtain the maximum column value. Typically, this is achieved by making the column the first column of some table index.

  • InnoDB sets an exclusive lock on the end of the index associated with the AUTO_INCREMENT column while initializing a previously specified AUTO_INCREMENT column on a table.

    With innodb_autoinc_lock_mode=0, InnoDB uses a special AUTO-INC table lock mode where the lock is obtained and held to the end of the current SQL statement while accessing the auto-increment counter. Other clients cannot insert into the table while the AUTO-INC table lock is held. The same behavior occurs for bulk inserts with innodb_autoinc_lock_mode=1. Table-level AUTO-INC locks are not used with innodb_autoinc_lock_mode=2. For more information, see Section 15.8.6, “AUTO_INCREMENT Handling in InnoDB”.

  • When you restart the MySQL server, InnoDB may reuse an old value that was generated for an AUTO_INCREMENT column but never stored (that is, a value that was generated during an old transaction that was rolled back).

  • When an AUTO_INCREMENT integer column runs out of values, a subsequent INSERT operation returns a duplicate-key error. This is general MySQL behavior, similar to how MyISAM works.

  • DELETE FROM tbl_name does not regenerate the table but instead deletes all rows, one by one.

  • Cascaded foreign key actions do not activate triggers.

  • You cannot create a table with a column name that matches the name of an internal InnoDB column (including DB_ROW_ID, DB_TRX_ID, DB_ROLL_PTR, and DB_MIX_ID). This restriction applies to use of the names in any letter case.

    mysql> CREATE TABLE t1 (c1 INT, db_row_id INT) ENGINE=INNODB;
    ERROR 1166 (42000): Incorrect column name 'db_row_id'
    

Locking and Transactions

  • LOCK TABLES acquires two locks on each table if innodb_table_locks=1 (the default). In addition to a table lock on the MySQL layer, it also acquires an InnoDB table lock. Versions of MySQL before 4.1.2 did not acquire InnoDB table locks; the old behavior can be selected by setting innodb_table_locks=0. If no InnoDB table lock is acquired, LOCK TABLES completes even if some records of the tables are being locked by other transactions.

    In MySQL 5.7, innodb_table_locks=0 has no effect for tables locked explicitly with LOCK TABLES ... WRITE. It does have an effect for tables locked for read or write by LOCK TABLES ... WRITE implicitly (for example, through triggers) or by LOCK TABLES ... READ.

  • All InnoDB locks held by a transaction are released when the transaction is committed or aborted. Thus, it does not make much sense to invoke LOCK TABLES on InnoDB tables in autocommit=1 mode because the acquired InnoDB table locks would be released immediately.

  • You cannot lock additional tables in the middle of a transaction because LOCK TABLES performs an implicit COMMIT and UNLOCK TABLES.

  • The limit on data-modifying transactions is 96 * 1023 concurrent transactions that generate undo records. 32 of 128 rollback segments are assigned to non-redo logs for transactions that modify temporary tables and related objects. This reduces the maximum number of concurrent data-modifying transactions from 128K to 96K. The 96K limit assumes that transactions do not modify temporary tables. If all data-modifying transactions also modify temporary tables, the limit is 32K concurrent transactions.

15.8.9 Clustered and Secondary Indexes

Every InnoDB table has a special index called the clustered index where the data for the rows is stored. Typically, the clustered index is synonymous with the primary key. To get the best performance from queries, inserts, and other database operations, you must understand how InnoDB uses the clustered index to optimize the most common lookup and DML operations for each table.

  • When you define a PRIMARY KEY on your table, InnoDB uses it as the clustered index. Define a primary key for each table that you create. If there is no logical unique and non-null column or set of columns, add a new auto-increment column, whose values are filled in automatically.

  • If you do not define a PRIMARY KEY for your table, MySQL locates the first UNIQUE index where all the key columns are NOT NULL and InnoDB uses it as the clustered index.

  • If the table has no PRIMARY KEY or suitable UNIQUE index, InnoDB internally generates a hidden clustered index on a synthetic column containing row ID values. The rows are ordered by the ID that InnoDB assigns to the rows in such a table. The row ID is a 6-byte field that increases monotonically as new rows are inserted. Thus, the rows ordered by the row ID are physically in insertion order.

How the Clustered Index Speeds Up Queries

Accessing a row through the clustered index is fast because the index search leads directly to the page with all the row data. If a table is large, the clustered index architecture often saves a disk I/O operation when compared to storage organizations that store row data using a different page from the index record. (For example, MyISAM uses one file for data rows and another for index records.)

How Secondary Indexes Relate to the Clustered Index

All indexes other than the clustered index are known as secondary indexes. In InnoDB, each record in a secondary index contains the primary key columns for the row, as well as the columns specified for the secondary index. InnoDB uses this primary key value to search for the row in the clustered index.

If the primary key is long, the secondary indexes use more space, so it is advantageous to have a short primary key.

For coding guidelines to take advantage of InnoDB clustered and secondary indexes, see Section 9.3.2, “Using Primary Keys”, Section 9.3, “Optimization and Indexes”, and Section 9.5, “Optimizing for InnoDB Tables”.

15.8.10 InnoDB FULLTEXT Indexes

FULLTEXT indexes are created on text-based columns (CHAR, VARCHAR, or TEXT columns) to help speed up queries and DML operations on data contained within those columns, omitting any words that are defined as stopwords.

A FULLTEXT index can be defined as part of a CREATE TABLE statement, or added later using ALTER TABLE or CREATE INDEX.

Full-text searching is performed using MATCH() ... AGAINST syntax. For usage information, see Section 13.9, “Full-Text Search Functions”.
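
For example, a natural language search against a table with a FULLTEXT index on a text column (such as the opening_lines table used in the examples later in this section) might look like this:

SELECT * FROM opening_lines
  WHERE MATCH(opening_line) AGAINST('Ishmael' IN NATURAL LANGUAGE MODE);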

There are several aspects to InnoDB FULLTEXT indexes, described under the following topics in this section:

InnoDB Full-Text Index Design

InnoDB FULLTEXT indexes have an inverted index design. Inverted indexes store a list of words, and for each word, a list of documents that the word appears in. To support proximity search, position information for each word is also stored, as a byte offset.

InnoDB Full-Text Index Tables

For each InnoDB FULLTEXT index, a set of index tables is created, as shown in the following example:

CREATE TABLE opening_lines (
id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
opening_line TEXT(500),
author VARCHAR(200),
title VARCHAR(200),
FULLTEXT idx (opening_line)
) ENGINE=InnoDB;

mysql> SELECT table_id, name, space from INFORMATION_SCHEMA.INNODB_SYS_TABLES
WHERE name LIKE 'test/%';
+----------+----------------------------------------------------+-------+
| table_id | name                                               | space |
+----------+----------------------------------------------------+-------+
|      333 | test/FTS_0000000000000147_00000000000001c9_INDEX_1 |   289 |
|      334 | test/FTS_0000000000000147_00000000000001c9_INDEX_2 |   290 |
|      335 | test/FTS_0000000000000147_00000000000001c9_INDEX_3 |   291 |
|      336 | test/FTS_0000000000000147_00000000000001c9_INDEX_4 |   292 |
|      337 | test/FTS_0000000000000147_00000000000001c9_INDEX_5 |   293 |
|      338 | test/FTS_0000000000000147_00000000000001c9_INDEX_6 |   294 |
|      330 | test/FTS_0000000000000147_BEING_DELETED            |   286 |
|      331 | test/FTS_0000000000000147_BEING_DELETED_CACHE      |   287 |
|      332 | test/FTS_0000000000000147_CONFIG                   |   288 |
|      328 | test/FTS_0000000000000147_DELETED                  |   284 |
|      329 | test/FTS_0000000000000147_DELETED_CACHE            |   285 |
|      327 | test/opening_lines                                 |   283 |
+----------+----------------------------------------------------+-------+ 

The first six tables represent the inverted index and are referred to as auxiliary index tables. When incoming documents are tokenized, the individual words (also referred to as tokens) are inserted into the index tables along with position information and the associated Document ID (DOC_ID). The words are fully sorted and partitioned among the six index tables based on the character set sort weight of the word's first character.

The inverted index is partitioned into six auxiliary index tables to support parallel index creation. By default, two threads tokenize, sort, and insert words and associated data into the index tables. The number of threads is configurable using the innodb_ft_sort_pll_degree option. When creating FULLTEXT indexes on large tables, consider increasing the number of threads.

Auxiliary index table names are prefixed with FTS_ and postfixed with INDEX_*. Each index table is associated with the indexed table by a hex value in the index table name that matches the table_id of the indexed table. For example, the table_id of the test/opening_lines table is 327, for which the hex value is 0x147. As shown in the preceding example, the 147 hex value appears in the names of index tables that are associated with the test/opening_lines table.

A hex value representing the index_id of the FULLTEXT index also appears in auxiliary index table names. For example, in the auxiliary table name test/FTS_0000000000000147_00000000000001c9_INDEX_1, the hex value 1c9 has a decimal value of 457. The index defined on the opening_lines table (idx) can be identified by querying the INFORMATION_SCHEMA.INNODB_SYS_INDEXES table for this value (457).

mysql> SELECT index_id, name, table_id, space from INFORMATION_SCHEMA.INNODB_SYS_INDEXES
  WHERE index_id=457;
+----------+------+----------+-------+
| index_id | name | table_id | space |
+----------+------+----------+-------+
|      457 | idx  |      327 |   283 |
+----------+------+----------+-------+

Index tables are stored in their own tablespace if the primary table is created in a file-per-table tablespace.

The other index tables shown in the preceding example are used for deletion handling and for storing the internal state of the FULLTEXT index.

  • FTS_*_DELETED and FTS_*_DELETED_CACHE: Contain the document IDs (DOC_ID) for documents that are deleted but whose data is not yet removed from the full-text index. The FTS_*_DELETED_CACHE is the in-memory version of the FTS_*_DELETED table.

  • FTS_*_BEING_DELETED and FTS_*_BEING_DELETED_CACHE: Contain the document IDs (DOC_ID) for documents that are deleted and whose data is currently in the process of being removed from the full-text index. The FTS_*_BEING_DELETED_CACHE table is the in-memory version of the FTS_*_BEING_DELETED table.

  • FTS_*_CONFIG: Stores information about the internal state of the FULLTEXT index. Most importantly, it stores the FTS_SYNCED_DOC_ID, which identifies documents that have been parsed and flushed to disk. In case of crash recovery, FTS_SYNCED_DOC_ID values are used to identify documents that have not been flushed to disk so that the documents can be re-parsed and added back to the FULLTEXT index cache. To view the data in this table, query the INFORMATION_SCHEMA.INNODB_FT_CONFIG table.

InnoDB Full-Text Index Cache

When a document is inserted, it is tokenized, and the individual words and associated data are inserted into the FULLTEXT index. This process, even for small documents, could result in numerous small insertions into the auxiliary index tables, making concurrent access to these tables a point of contention. To avoid this problem, InnoDB uses a FULLTEXT index cache to temporarily cache index table insertions for recently inserted rows. This in-memory cache structure holds insertions until the cache is full and then batch flushes them to disk (to the auxiliary index tables). You can query the INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE table to view tokenized data for recently inserted rows.
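
For example, after pointing innodb_ft_aux_table at the indexed table (using the test/opening_lines table from the earlier example), recently inserted tokens can be inspected:

SET GLOBAL innodb_ft_aux_table = 'test/opening_lines';

SELECT WORD, DOC_COUNT, DOC_ID, POSITION
  FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_CACHE
  LIMIT 5;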

The caching and batch flushing behavior avoids frequent updates to auxiliary index tables, which could result in concurrent access issues during busy insert and update times. The batching technique also avoids multiple insertions for the same word, and minimizes duplicate entries. Instead of flushing each word individually, insertions for the same word are merged and flushed to disk as a single entry, improving insertion efficiency while keeping auxiliary index tables as small as possible.

The innodb_ft_cache_size variable is used to configure the full-text index cache size (on a per-table basis), which affects how often the full-text index cache is flushed. You can also define a global full-text index cache size limit for all tables in a given instance using the innodb_ft_total_cache_size option.

The full-text index cache stores the same information as auxiliary index tables. However, the full-text index cache only caches tokenized data for recently inserted rows. The data that is already flushed to disk (to the full-text auxiliary tables) is not brought back into the full-text index cache when queried. The data in auxiliary index tables is queried directly, and results from the auxiliary index tables are merged with results from the full-text index cache before being returned.

InnoDB Full-Text Index Document ID and FTS_DOC_ID Column

InnoDB uses a unique document identifier referred to as a Document ID (DOC_ID) to map words in the full-text index to document records where the word appears. The mapping requires an FTS_DOC_ID column on the indexed table. If an FTS_DOC_ID column is not defined, InnoDB automatically adds a hidden FTS_DOC_ID column when the full-text index is created. The following example demonstrates this behavior.

The following table definition does not include an FTS_DOC_ID column:

CREATE TABLE opening_lines (
id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
opening_line TEXT(500),
author VARCHAR(200),
title VARCHAR(200)
) ENGINE=InnoDB;    

When you create a full-text index on the table using CREATE FULLTEXT INDEX syntax, a warning is returned which reports that InnoDB is rebuilding the table to add the FTS_DOC_ID column.

mysql> CREATE FULLTEXT INDEX idx ON opening_lines(opening_line);
Query OK, 0 rows affected, 1 warning (0.19 sec)
Records: 0  Duplicates: 0  Warnings: 1

mysql> SHOW WARNINGS;
+---------+------+--------------------------------------------------+
| Level   | Code | Message                                          |
+---------+------+--------------------------------------------------+
| Warning |  124 | InnoDB rebuilding table to add column FTS_DOC_ID |
+---------+------+--------------------------------------------------+

The same warning is returned when using ALTER TABLE to add a full-text index to a table that does not have an FTS_DOC_ID column. If you create a full-text index at CREATE TABLE time and do not specify an FTS_DOC_ID column, InnoDB adds a hidden FTS_DOC_ID column, without warning.

Defining an FTS_DOC_ID column at CREATE TABLE time reduces the time required to create a full-text index on a table that is already loaded with data. If an FTS_DOC_ID column is defined on a table prior to loading data, the table and its indexes do not have to be rebuilt to add the new column. If you are not concerned with CREATE FULLTEXT INDEX performance, leave out the FTS_DOC_ID column to have InnoDB create it for you. InnoDB creates a hidden FTS_DOC_ID column along with a unique index (FTS_DOC_ID_INDEX) on the FTS_DOC_ID column. If you want to create your own FTS_DOC_ID column, the column must be defined as BIGINT UNSIGNED NOT NULL and named FTS_DOC_ID (all upper case), as in the following example:

Note

The FTS_DOC_ID column does not need to be defined as an AUTO_INCREMENT column but AUTO_INCREMENT could make loading data easier.

CREATE TABLE opening_lines (
FTS_DOC_ID BIGINT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
opening_line TEXT(500),
author VARCHAR(200),
title VARCHAR(200)
) ENGINE=InnoDB;    

If you choose to define the FTS_DOC_ID column yourself, you are responsible for managing the column to avoid empty or duplicate values. FTS_DOC_ID values cannot be reused, which means FTS_DOC_ID values must be ever increasing.

Optionally, you can create the required unique FTS_DOC_ID_INDEX (all upper case) on the FTS_DOC_ID column.

CREATE UNIQUE INDEX FTS_DOC_ID_INDEX on opening_lines(FTS_DOC_ID);

If you do not create the FTS_DOC_ID_INDEX, InnoDB creates it automatically.

Before MySQL 5.7.13, the permitted gap between the largest used FTS_DOC_ID value and new FTS_DOC_ID value is 10000. In MySQL 5.7.13 and later, the permitted gap is 65535.

InnoDB Full-Text Index Deletion Handling

Deleting a record that has a full-text index column could result in numerous small deletions in the auxiliary index tables, making concurrent access to these tables a point of contention. To avoid this problem, the Document ID (DOC_ID) of a deleted document is logged in a special FTS_*_DELETED table whenever a record is deleted from an indexed table, and the indexed record remains in the full-text index. Before returning query results, information in the FTS_*_DELETED table is used to filter out deleted Document IDs. The benefit of this design is that deletions are fast and inexpensive. The drawback is that the size of the index is not immediately reduced after deleting records. To remove full-text index entries for deleted records, you must run OPTIMIZE TABLE on the indexed table with innodb_optimize_fulltext_only=ON to rebuild the full-text index. For more information, see Optimizing InnoDB Full-Text Indexes.
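
For example, using the opening_lines table from the earlier examples, the full-text index can be rebuilt as follows. Each OPTIMIZE TABLE run processes a limited number of words (controlled by innodb_ft_num_word_optimize), so the statement may need to be run more than once:

SET GLOBAL innodb_optimize_fulltext_only=ON;
OPTIMIZE TABLE opening_lines;
SET GLOBAL innodb_optimize_fulltext_only=OFF;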

InnoDB Full-Text Index Transaction Handling

InnoDB FULLTEXT indexes have special transaction handling characteristics due to their caching and batch processing behavior. Specifically, updates and insertions on a FULLTEXT index are processed at transaction commit time, which means that a FULLTEXT search can only see committed data. The following example demonstrates this behavior. The FULLTEXT search only returns a result after the inserted lines are committed.

mysql> CREATE TABLE opening_lines (
id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
opening_line TEXT(500),
author VARCHAR(200),
title VARCHAR(200),
FULLTEXT idx (opening_line)
) ENGINE=InnoDB;

mysql> BEGIN;
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO opening_lines(opening_line,author,title) VALUES
('Call me Ishmael.','Herman Melville','Moby-Dick'),
('A screaming comes across the sky.','Thomas Pynchon','Gravity\'s Rainbow'),
('I am an invisible man.','Ralph Ellison','Invisible Man'),
('Where now? Who now? When now?','Samuel Beckett','The Unnamable'),
('It was love at first sight.','Joseph Heller','Catch-22'),
('All this happened, more or less.','Kurt Vonnegut','Slaughterhouse-Five'),
('Mrs. Dalloway said she would buy the flowers herself.','Virginia Woolf','Mrs. Dalloway'),
('It was a pleasure to burn.','Ray Bradbury','Fahrenheit 451');
Query OK, 8 rows affected (0.00 sec)
Records: 8  Duplicates: 0  Warnings: 0

mysql> SELECT COUNT(*) FROM opening_lines WHERE MATCH(opening_line) AGAINST('Ishmael');
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+

mysql> COMMIT;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT COUNT(*) FROM opening_lines WHERE MATCH(opening_line) AGAINST('Ishmael');
+----------+
| COUNT(*) |
+----------+
|        1 |
+----------+

Monitoring InnoDB Full-Text Indexes

You can monitor and examine the special text-processing aspects of InnoDB FULLTEXT indexes by querying the following INFORMATION_SCHEMA tables:

  • INNODB_FT_CONFIG

  • INNODB_FT_INDEX_TABLE

  • INNODB_FT_INDEX_CACHE

  • INNODB_FT_DEFAULT_STOPWORD

  • INNODB_FT_DELETED

  • INNODB_FT_BEING_DELETED
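
For example, the INNODB_FT_* tables (other than INNODB_FT_DEFAULT_STOPWORD) return data only after the innodb_ft_aux_table option is pointed at an indexed table. A minimal sketch, assuming the opening_lines table from the earlier examples resides in the test database:

SET GLOBAL innodb_ft_aux_table = 'test/opening_lines';
SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_CONFIG;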

You can also view basic information for FULLTEXT indexes and tables by querying INNODB_SYS_INDEXES and INNODB_SYS_TABLES.

See Section 15.15.4, “InnoDB INFORMATION_SCHEMA FULLTEXT Index Tables” for more information.

15.8.11 Physical Structure of an InnoDB Index

With the exception of spatial indexes, InnoDB indexes are B-tree data structures. Spatial indexes use R-trees, which are specialized data structures for indexing multi-dimensional data. Index records are stored in the leaf pages of their B-tree or R-tree data structure. The default size of an index page is 16KB.

When new records are inserted into an InnoDB clustered index, InnoDB tries to leave 1/16 of the page free for future insertions and updates of the index records. If index records are inserted in a sequential order (ascending or descending), the resulting index pages are about 15/16 full. If records are inserted in a random order, the pages are from 1/2 to 15/16 full.

InnoDB performs a bulk load when creating or rebuilding B-tree indexes. This method of index creation is known as a sorted index build. innodb_fill_factor defines the percentage of space on each B-tree page that is filled during a sorted index build, with the remaining space reserved for future index growth. Sorted index builds are not supported for spatial indexes. For more information, see Section 15.8.12, “Sorted Index Builds”. An innodb_fill_factor setting of 100 leaves 1/16 of the space in clustered index pages free for future index growth.

If the fill factor of an InnoDB index page drops below the MERGE_THRESHOLD, which is 50% by default if not specified, InnoDB tries to contract the index tree to free the page. The MERGE_THRESHOLD setting applies to both B-tree and R-tree indexes. For more information, see Section 15.6.13, “Configuring the Merge Threshold for Index Pages”.
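
For example, the merge threshold can be lowered for all indexes of a table through the table COMMENT clause, as described in Section 15.6.13. This is a sketch only; the table name t1 is hypothetical:

-- Contract index trees once pages fall below 45% full instead of 50%.
ALTER TABLE t1 COMMENT='MERGE_THRESHOLD=45';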

You can configure the page size for all InnoDB tablespaces in a MySQL instance by setting the innodb_page_size configuration option before creating the instance. Once the page size for an instance is set, you cannot change it. Supported sizes are 64KB, 32KB, 16KB (default), 8KB, and 4KB, corresponding to the option values 64k, 32k, 16k, 8k, and 4k.
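
For example, a configuration-file sketch that selects a non-default page size before a new instance is created; the 8k value shown is only illustrative:

[mysqld]
innodb_page_size=8k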

Support for 32KB and 64KB page sizes was added in MySQL 5.7. For more information, refer to the innodb_page_size documentation.

A MySQL instance using a particular InnoDB page size cannot use data files or log files from an instance that uses a different page size.

15.8.12 Sorted Index Builds

InnoDB performs a bulk load instead of inserting one index record at a time when creating or rebuilding indexes. This method of index creation is also known as a sorted index build. Sorted index builds are not supported for spatial indexes.

There are three phases to an index build. In the first phase, the clustered index is scanned, and index entries are generated and added to the sort buffer. When the sort buffer becomes full, entries are sorted and written out to a temporary intermediate file. This process is also known as a run. In the second phase, with one or more runs written to the temporary intermediate file, a merge sort is performed on all entries in the file. In the third and final phase, the sorted entries are inserted into the B-tree.

Prior to the introduction of sorted index builds, index entries were inserted into the B-tree one record at a time using insert APIs. This method involved opening a B-tree cursor to find the insert position and then inserting entries into a B-tree page using an optimistic insert. If an insert failed due to a page being full, a pessimistic insert would be performed, which involves opening a B-tree cursor and splitting and merging B-tree nodes as necessary to find space for the entry. The drawbacks of this top-down method of building an index are the cost of searching for an insert position and the constant splitting and merging of B-tree nodes.

Sorted index builds use a bottom-up approach to building an index. With this approach, a reference to the right-most leaf page is held at all levels of the B-tree. The right-most leaf page at the necessary B-tree depth is allocated and entries are inserted according to their sorted order. Once a leaf page is full, a node pointer is appended to the parent page and a sibling leaf page is allocated for the next insert. This process continues until all entries are inserted, which may result in inserts up to the root level. When a sibling page is allocated, the reference to the previously pinned leaf page is released, and the newly allocated leaf page becomes the right-most leaf page and the new default insert location.

Reserving B-tree Page Space for Future Index Growth

To set aside space for future index growth, you can use the innodb_fill_factor configuration option to reserve a percentage of B-tree page space. For example, setting innodb_fill_factor to 80 reserves 20 percent of the space in B-tree pages during a sorted index build. This setting applies to both B-tree leaf and non-leaf pages. It does not apply to external pages used for TEXT or BLOB entries. The amount of space that is reserved may not be exactly as configured, as the innodb_fill_factor value is interpreted as a hint rather than a hard limit.
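
For example, the following sketch reserves roughly 20 percent of B-tree page space during subsequent sorted index builds. The table and column names are hypothetical:

SET GLOBAL innodb_fill_factor=80;

-- Indexes created or rebuilt after this point use the new fill factor.
ALTER TABLE t1 ADD INDEX idx_c1 (c1);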

Sorted Index Builds and Full-Text Index Support

Sorted index builds are supported for fulltext indexes. Previously, SQL was used to insert entries into a fulltext index.

Sorted Index Builds and Compressed Tables

For compressed tables, the previous index creation method appended entries to both compressed and uncompressed pages. When the modification log (representing free space on the compressed page) became full, the compressed page would be recompressed. If compression failed due to a lack of space, the page would be split. With sorted index builds, entries are only appended to uncompressed pages. When an uncompressed page becomes full, it is compressed. Adaptive padding is used to ensure that compression succeeds in most cases, but if compression fails, the page is split and compression is attempted again. This process continues until compression is successful. For additional information about compression of B-Tree pages, see Section 15.9.1.5, “How Compression Works for InnoDB Tables”.

Sorted Index Builds and Redo Logging

Redo logging is turned off during a sorted index build. Instead, there is a checkpoint to ensure that the index build can withstand a crash or failure. The checkpoint forces a write of all dirty pages to disk. During a sorted index build, the page cleaner thread is signaled periodically to flush dirty pages to ensure that the checkpoint operation can be processed quickly. Normally, the page cleaner thread flushes dirty pages when the number of clean pages falls below a set threshold. For sorted index builds, dirty pages are flushed promptly to reduce checkpoint overhead and to parallelize IO and CPU activity.

Sorted Index Builds and Optimizer Statistics

Sorted index builds may result in optimizer statistics that differ from those generated by the previous method of index creation. The difference in statistics, which is not expected to affect workload performance, is due to the different algorithm that is used to populate the index.

15.9 InnoDB Table and Page Compression

This section provides information about the InnoDB table compression and InnoDB page compression features. The page compression feature is referred to as transparent page compression.

Using the compression features of InnoDB, you can create tables where the data is stored in compressed form. Compression can help to improve both raw performance and scalability. Compression means that less data is transferred between disk and memory, and that the data takes up less space on disk and in memory. The benefits are amplified for tables with secondary indexes, because index data is compressed also. Compression can be especially important for SSD storage devices, because they tend to have lower capacity than HDD devices.

15.9.1 InnoDB Table Compression

This section describes InnoDB table compression, which is supported with InnoDB tables that reside in file-per-table tablespaces or general tablespaces. Table compression is enabled using the ROW_FORMAT=COMPRESSED attribute with CREATE TABLE or ALTER TABLE.

15.9.1.1 Overview of Table Compression

Because processors and cache memories have increased in speed more than disk storage devices, many workloads are disk-bound. Data compression enables smaller database size, reduced I/O, and improved throughput, at the small cost of increased CPU utilization. Compression is especially valuable for read-intensive applications, on systems with enough RAM to keep frequently used data in memory.

An InnoDB table created with ROW_FORMAT=COMPRESSED can use a smaller page size on disk than the configured innodb_page_size value. Smaller pages require less I/O to read from and write to disk, which is especially valuable for SSD devices.

The compressed page size is specified through the CREATE TABLE or ALTER TABLE KEY_BLOCK_SIZE parameter. The different page size requires that the table be placed in a file-per-table tablespace or general tablespace rather than in the system tablespace, as the system tablespace cannot store compressed tables. For more information, see Section 15.7.4, “InnoDB File-Per-Table Tablespaces”, and Section 15.7.9, “InnoDB General Tablespaces”.

The level of compression is the same regardless of the KEY_BLOCK_SIZE value. As you specify smaller values for KEY_BLOCK_SIZE, you get the I/O benefits of increasingly smaller pages. But if you specify a value that is too small, there is additional overhead to reorganize the pages when data values cannot be compressed enough to fit multiple rows in each page. There is a hard limit on how small KEY_BLOCK_SIZE can be for a table, based on the lengths of the key columns for each of its indexes. Specify a value that is too small, and the CREATE TABLE or ALTER TABLE statement fails.

In the buffer pool, the compressed data is held in small pages, with a page size based on the KEY_BLOCK_SIZE value. For extracting or updating the column values, MySQL also creates an uncompressed page in the buffer pool with the uncompressed data. Within the buffer pool, any updates to the uncompressed page are also re-written back to the equivalent compressed page. You might need to size your buffer pool to accommodate the additional data of both compressed and uncompressed pages, although the uncompressed pages are evicted from the buffer pool when space is needed, and then uncompressed again on the next access.

15.9.1.2 Creating Compressed Tables

Compressed tables can be created in file-per-table tablespaces or in general tablespaces. Table compression is not available for the InnoDB system tablespace. The system tablespace (space 0, the .ibdata files) can contain user-created tables, but it also contains internal system data, which is never compressed. Thus, compression applies only to tables (and indexes) stored in file-per-table or general tablespaces.

Creating a Compressed Table in File-Per-Table Tablespace

To create a compressed table in a file-per-table tablespace, innodb_file_per_table must be enabled (the default as of MySQL 5.6.6) and innodb_file_format must be set to Barracuda. You can set these parameters in the MySQL configuration file (my.cnf or my.ini) or dynamically, using a SET statement.

After the innodb_file_per_table and innodb_file_format options are configured, specify the ROW_FORMAT=COMPRESSED clause or KEY_BLOCK_SIZE clause, or both, in a CREATE TABLE or ALTER TABLE statement to create a compressed table in a file-per-table tablespace.

For example, you might use the following statements:

SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_file_format=Barracuda;
CREATE TABLE t1
 (c1 INT PRIMARY KEY)
 ROW_FORMAT=COMPRESSED
 KEY_BLOCK_SIZE=8;

Creating a Compressed Table in a General Tablespace

To create a compressed table in a general tablespace, FILE_BLOCK_SIZE must be defined for the general tablespace, which is specified when the tablespace is created. The FILE_BLOCK_SIZE value must be a valid compressed page size in relation to the innodb_page_size value, and the page size of the compressed table, defined by the CREATE TABLE or ALTER TABLE KEY_BLOCK_SIZE clause, must be equal to FILE_BLOCK_SIZE/1024. For example, if innodb_page_size=16384 and FILE_BLOCK_SIZE=8192, the KEY_BLOCK_SIZE of the table must be 8. For more information, see Section 15.7.9, “InnoDB General Tablespaces”.

The following example demonstrates creating a general tablespace and adding a compressed table. The example assumes a default innodb_page_size of 16K. The FILE_BLOCK_SIZE of 8192 requires that the compressed table have a KEY_BLOCK_SIZE of 8.

mysql> CREATE TABLESPACE `ts2` ADD DATAFILE 'ts2.ibd' FILE_BLOCK_SIZE = 8192 Engine=InnoDB;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t4 (c1 INT PRIMARY KEY) TABLESPACE ts2 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
Query OK, 0 rows affected (0.00 sec)

Notes
  • If you specify ROW_FORMAT=COMPRESSED, you can omit KEY_BLOCK_SIZE; the KEY_BLOCK_SIZE setting defaults to half the innodb_page_size value.

  • If you specify a valid KEY_BLOCK_SIZE value, you can omit ROW_FORMAT=COMPRESSED; compression is enabled automatically.

  • To determine the best value for KEY_BLOCK_SIZE, typically you create several copies of the same table with different values for this clause, then measure the size of the resulting .ibd files and see how well each performs with a realistic workload. For general tablespaces, keep in mind that dropping a table does not reduce the size of the general tablespace .ibd file, nor does it return disk space to the operating system. For more information, see Section 15.7.9, “InnoDB General Tablespaces”.

  • The KEY_BLOCK_SIZE value is treated as a hint; a different size could be used by InnoDB if necessary. For file-per-table tablespaces, the KEY_BLOCK_SIZE can only be less than or equal to the innodb_page_size value. If you specify a value greater than the innodb_page_size value, the specified value is ignored, a warning is issued, and KEY_BLOCK_SIZE is set to half of the innodb_page_size value. If innodb_strict_mode=ON, specifying an invalid KEY_BLOCK_SIZE value returns an error. For general tablespaces, valid KEY_BLOCK_SIZE values depend on the FILE_BLOCK_SIZE setting of the tablespace. For more information, see Section 15.7.9, “InnoDB General Tablespaces”.

  • 32k and 64k page sizes do not support compression. For more information, refer to the innodb_page_size documentation.

  • The default uncompressed size of InnoDB data pages is 16KB. Depending on the combination of option values, MySQL uses a page size of 1KB, 2KB, 4KB, 8KB, or 16KB for the tablespace data file (.ibd file). The actual compression algorithm is not affected by the KEY_BLOCK_SIZE value; the value determines how large each compressed chunk is, which in turn affects how many rows can be packed into each compressed page.

  • When creating a compressed table in a file-per-table tablespace, setting KEY_BLOCK_SIZE equal to the InnoDB page size does not typically result in much compression. For example, setting KEY_BLOCK_SIZE=16 typically would not result in much compression, since the normal InnoDB page size is 16KB. This setting may still be useful for tables with many long BLOB, VARCHAR or TEXT columns, because such values often do compress well, and might therefore require fewer overflow pages as described in Section 15.9.1.5, “How Compression Works for InnoDB Tables”. For general tablespaces, a KEY_BLOCK_SIZE value equal to the InnoDB page size is not permitted. For more information, see Section 15.7.9, “InnoDB General Tablespaces”.

  • All indexes of a table (including the clustered index) are compressed using the same page size, as specified in the CREATE TABLE or ALTER TABLE statement. Table attributes such as ROW_FORMAT and KEY_BLOCK_SIZE are not part of the CREATE INDEX syntax for InnoDB tables, and are ignored if they are specified (although, if specified, they will appear in the output of the SHOW CREATE TABLE statement).

  • For performance-related configuration options, see Section 15.9.1.3, “Tuning Compression for InnoDB Tables”.

Restrictions on Compressed Tables
  • MySQL versions prior to 5.1 cannot process compressed tables.

  • Compressed tables cannot be stored in the InnoDB system tablespace.

  • General tablespaces can contain multiple tables, but compressed and uncompressed tables cannot coexist within the same general tablespace.

  • Compression applies to an entire table and all its associated indexes, not to individual rows, despite the clause name ROW_FORMAT.

15.9.1.3 Tuning Compression for InnoDB Tables

Most often, the internal optimizations described in InnoDB Data Storage and Compression ensure that the system runs well with compressed data. However, because the efficiency of compression depends on the nature of your data, you can make decisions that affect the performance of compressed tables:

  • Which tables to compress.

  • What compressed page size to use.

  • Whether to adjust the size of the buffer pool based on run-time performance characteristics, such as the amount of time the system spends compressing and uncompressing data.

  • Whether the workload is more like a data warehouse (primarily queries) or an OLTP system (mix of queries and DML).

  • If the system performs DML operations on compressed tables, and the way the data is distributed leads to expensive compression failures at runtime, you might adjust additional advanced configuration options.

Use the guidelines in this section to help make those architectural and configuration choices. When you are ready to conduct long-term testing and put compressed tables into production, see Section 15.9.1.4, “Monitoring InnoDB Table Compression at Runtime” for ways to verify the effectiveness of those choices under real-world conditions.

When to Use Compression

In general, compression works best on tables that include a reasonable number of character string columns and where the data is read far more often than it is written. Because there are no guaranteed ways to predict whether or not compression benefits a particular situation, always test with a specific workload and data set running on a representative configuration. Consider the following factors when deciding which tables to compress.

Data Characteristics and Compression

A key determinant of the efficiency of compression in reducing the size of data files is the nature of the data itself. Recall that compression works by identifying repeated strings of bytes in a block of data. Completely randomized data is the worst case. Typical data often has repeated values, and so compresses effectively. Character strings often compress well, whether defined in CHAR, VARCHAR, TEXT or BLOB columns. On the other hand, tables containing mostly binary data (integers or floating point numbers) or data that is previously compressed (for example JPEG or PNG images) may not generally compress well, significantly or at all.

You choose whether to turn on compression for each InnoDB table. A table and all of its indexes use the same (compressed) page size. It might be that the primary key (clustered) index, which contains the data for all columns of a table, compresses more effectively than the secondary indexes. For those cases where there are long rows, the use of compression might result in long column values being stored off-page, as discussed in Section 15.11.3, “DYNAMIC and COMPRESSED Row Formats”. Those overflow pages may compress well. Given these considerations, for many applications, some tables compress more effectively than others, and you might find that your workload performs best only with a subset of tables compressed.

To determine whether or not to compress a particular table, conduct experiments. You can get a rough estimate of how efficiently your data can be compressed by using a utility that implements LZ77 compression (such as gzip or WinZip) on a copy of the .ibd file for an uncompressed table. You can expect less compression from a MySQL compressed table than from file-based compression tools, because MySQL compresses data in chunks based on the page size, 16KB by default. In addition to user data, the page format includes some internal system data that is not compressed. File-based compression utilities can examine much larger chunks of data, and so might find more repeated strings in a huge file than MySQL can find in an individual page.

Another way to test compression on a specific table is to copy some data from your uncompressed table to a similar, compressed table (having all the same indexes) in a file-per-table tablespace and look at the size of the resulting .ibd file. For example:

use test;
set global innodb_file_per_table=1;
set global innodb_file_format=Barracuda;
set global autocommit=0;

-- Create an uncompressed table with a million or two rows.
create table big_table as select * from information_schema.columns;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
commit;
alter table big_table add id int unsigned not null primary key auto_increment;

show create table big_table\G

select count(id) from big_table;

-- Check how much space is needed for the uncompressed table.
\! ls -l data/test/big_table.ibd

create table key_block_size_4 like big_table;
alter table key_block_size_4 key_block_size=4 row_format=compressed;

insert into key_block_size_4 select * from big_table;
commit;

-- Check how much space is needed for a compressed table
-- with particular compression settings.
\! ls -l data/test/key_block_size_4.ibd

This experiment produced the following numbers, which of course could vary considerably depending on your table structure and data:

-rw-rw----  1 cirrus  staff  310378496 Jan  9 13:44 data/test/big_table.ibd
-rw-rw----  1 cirrus  staff  83886080 Jan  9 15:10 data/test/key_block_size_4.ibd

To see whether compression is efficient for your particular workload, monitor compression performance at runtime, as described in Section 15.9.1.4, “Monitoring InnoDB Table Compression at Runtime”.

Database Compression versus Application Compression

Decide whether to compress data in your application or in the table; do not use both types of compression for the same data. When you compress the data in the application and store the results in a compressed table, extra space savings are extremely unlikely, and the double compression just wastes CPU cycles.

Compressing in the Database

When enabled, MySQL table compression is automatic and applies to all columns and index values. The columns can still be tested with operators such as LIKE, and sort operations can still use indexes even when the index values are compressed. Because indexes are often a significant fraction of the total size of a database, compression could result in significant savings in storage, I/O or processor time. The compression and decompression operations happen on the database server, which likely is a powerful system that is sized to handle the expected load.

Compressing in the Application

If you compress data such as text in your application before it is inserted into the database, you might save overhead for data that does not compress well by compressing some columns and not others. This approach uses CPU cycles for compression and uncompression on the client machine rather than the database server, which might be appropriate for a distributed application with many clients, or where the client machine has spare CPU cycles.

Hybrid Approach

Of course, it is possible to combine these approaches. For some applications, it may be appropriate to use some compressed tables and some uncompressed tables. It may be best to externally compress some data (and store it in uncompressed tables) and allow MySQL to compress (some of) the other tables in the application. As always, up-front design and real-life testing are valuable in reaching the right decision.

Workload Characteristics and Compression

In addition to choosing which tables to compress (and the page size), the workload is another key determinant of performance. If the application is dominated by reads, rather than updates, fewer pages need to be reorganized and recompressed after the index page runs out of room for the per-page modification log that MySQL maintains for compressed data. If the updates predominantly change non-indexed columns or those containing BLOBs or large strings that happen to be stored off-page, the overhead of compression may be acceptable. If the only changes to a table are INSERTs that use a monotonically increasing primary key, and there are few secondary indexes, there is little need to reorganize and recompress index pages. Since MySQL can delete-mark and delete rows on compressed pages in place by modifying uncompressed data, DELETE operations on a table are relatively efficient.

For some environments, the time it takes to load data can be as important as run-time retrieval. Especially in data warehouse environments, many tables may be read-only or read-mostly. In those cases, it might or might not be acceptable to pay the price of compression in terms of increased load time, unless the resulting savings in fewer disk reads or in storage cost is significant.

Fundamentally, compression works best when CPU time is available for compressing and uncompressing data. Thus, if your workload is I/O-bound rather than CPU-bound, you might find that compression can improve overall performance. When you test your application performance with different compression configurations, test on a platform similar to the planned configuration of the production system.

Configuration Characteristics and Compression

Reading and writing database pages from and to disk is the slowest aspect of system performance. Compression attempts to reduce I/O by using CPU time to compress and uncompress data, and is most effective when I/O is a relatively scarce resource compared to processor cycles.

This is often especially the case when running in a multi-user environment with fast, multi-core CPUs. When a page of a compressed table is in memory, MySQL often uses additional memory, typically 16KB, in the buffer pool for an uncompressed copy of the page. The adaptive LRU algorithm attempts to balance the use of memory between compressed and uncompressed pages to take into account whether the workload is running in an I/O-bound or CPU-bound manner. Still, a configuration with more memory dedicated to the buffer pool tends to run better when using compressed tables than a configuration where memory is highly constrained.

Choosing the Compressed Page Size

The optimal setting of the compressed page size depends on the type and distribution of data that the table and its indexes contain. The compressed page size should always be bigger than the maximum record size, or operations may fail as noted in Compression of B-Tree Pages.

Setting the compressed page size too large wastes some space, but the pages do not have to be compressed as often. If the compressed page size is set too small, inserts or updates may require time-consuming recompression, and the B-tree nodes may have to be split more frequently, leading to bigger data files and less efficient indexing.

Typically, you set the compressed page size to 8K or 4K bytes. Given that the maximum row size for an InnoDB table is around 8K, KEY_BLOCK_SIZE=8 is usually a safe choice.

15.9.1.4 Monitoring InnoDB Table Compression at Runtime

Overall application performance, CPU and I/O utilization and the size of disk files are good indicators of how effective compression is for your application. This section builds on the performance tuning advice from Section 15.9.1.3, “Tuning Compression for InnoDB Tables”, and shows how to find problems that might not turn up during initial testing.

To dig deeper into performance considerations for compressed tables, you can monitor compression performance at runtime using the Information Schema tables described in Example 15.10, “Using the Compression Information Schema Tables”. These tables reflect the internal use of memory and the rates of compression used overall.

The INNODB_CMP table reports information about compression activity for each compressed page size (KEY_BLOCK_SIZE) in use. The information in these tables is system-wide: it summarizes the compression statistics across all compressed tables in your database. You can use this data to help decide whether or not to compress a table by examining these tables when no other compressed tables are being accessed. It involves relatively low overhead on the server, so you might query it periodically on a production server to check the overall efficiency of the compression feature.
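
For example, a sketch of querying the system-wide compression statistics:

SELECT page_size, compress_ops, compress_ops_ok, compress_time,
       uncompress_ops, uncompress_time
  FROM INFORMATION_SCHEMA.INNODB_CMP;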

The INNODB_CMP_PER_INDEX table reports information about compression activity for individual tables and indexes. This information is more targeted and more useful for evaluating compression efficiency and diagnosing performance issues one table or index at a time. (Because each InnoDB table is represented as a clustered index, MySQL does not make a big distinction between tables and indexes in this context.) The INNODB_CMP_PER_INDEX table does involve substantial overhead, so it is more suitable for development servers, where you can compare the effects of different workloads, data, and compression settings in isolation. To guard against imposing this monitoring overhead by accident, you must enable the innodb_cmp_per_index_enabled configuration option before you can query the INNODB_CMP_PER_INDEX table.
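
For example, a sketch of enabling per-index statistics collection and then examining the most heavily compressed indexes:

SET GLOBAL innodb_cmp_per_index_enabled=ON;

SELECT database_name, table_name, index_name,
       compress_ops, compress_ops_ok, uncompress_ops
  FROM INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX
  ORDER BY compress_ops DESC;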

The key statistics to consider are the number of, and amount of time spent performing, compression and uncompression operations. Since MySQL splits B-tree nodes when they are too full to contain the compressed data following a modification, compare the number of successful compression operations with the number of such operations overall. Based on the information in the INNODB_CMP and INNODB_CMP_PER_INDEX tables and overall application performance and hardware resource utilization, you might make changes in your hardware configuration, adjust the size of the buffer pool, choose a different page size, or select a different set of tables to compress.

If the amount of CPU time required for compressing and uncompressing is high, changing to faster or multi-core CPUs can help improve performance with the same data, application workload and set of compressed tables. Increasing the size of the buffer pool might also help performance, so that more uncompressed pages can stay in memory, reducing the need to uncompress pages that exist in memory only in compressed form.

A large number of compression operations overall (compared to the number of INSERT, UPDATE and DELETE operations in your application and the size of the database) could indicate that some of your compressed tables are being updated too heavily for effective compression. If so, choose a larger page size, or be more selective about which tables you compress.

If the number of successful compression operations (COMPRESS_OPS_OK) is a high percentage of the total number of compression operations (COMPRESS_OPS), then the system is likely performing well. If the ratio is low, then MySQL is reorganizing, recompressing, and splitting B-tree nodes more often than is desirable. In this case, avoid compressing some tables, or increase KEY_BLOCK_SIZE for some of the compressed tables. You might turn off compression for tables that cause the number of compression failures in your application to be more than 1% or 2% of the total. (Such a failure ratio might be acceptable during a temporary operation such as a data load).

15.9.1.5 How Compression Works for InnoDB Tables

This section describes some internal implementation details about compression for InnoDB tables. The information presented here may be helpful in tuning for performance, but is not necessary to know for basic use of compression.

Compression Algorithms

Some operating systems implement compression at the file system level. Files are typically divided into fixed-size blocks that are compressed into variable-size blocks, which easily leads to fragmentation. Every time something inside a block is modified, the whole block is recompressed before it is written to disk. These properties make this compression technique unsuitable for use in an update-intensive database system.

MySQL implements compression with the help of the well-known zlib library, which implements the LZ77 compression algorithm. This compression algorithm is mature, robust, and efficient in both CPU utilization and in reduction of data size. The algorithm is lossless, so that the original uncompressed data can always be reconstructed from the compressed form. LZ77 compression works by finding sequences of data that are repeated within the data to be compressed. The patterns of values in your data determine how well it compresses, but typical user data often compresses by 50% or more.

Unlike compression performed by an application, or compression features of some other database management systems, InnoDB compression applies both to user data and to indexes. In many cases, indexes can constitute 40-50% or more of the total database size, so this difference is significant. When compression is working well for a data set, the size of the InnoDB data files (the file-per-table tablespace or general tablespace .ibd files) is 25% to 50% of the uncompressed size or possibly smaller. Depending on the workload, this smaller database can in turn lead to a reduction in I/O, and an increase in throughput, at a modest cost in terms of increased CPU utilization. You can adjust the balance between compression level and CPU overhead by modifying the innodb_compression_level configuration option.
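
For example, a sketch of raising the compression level to favor smaller data files at the cost of extra CPU time (the default level is 6; 9 is the maximum):

SET GLOBAL innodb_compression_level=9;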

InnoDB Data Storage and Compression

All user data in InnoDB tables is stored in pages comprising a B-tree index (the clustered index). In some other database systems, this type of index is called an index-organized table. Each row in the index node contains the values of the (user-specified or system-generated) primary key and all the other columns of the table.

Secondary indexes in InnoDB tables are also B-trees, containing pairs of values: the index key and a pointer to a row in the clustered index. The pointer is in fact the value of the primary key of the table, which is used to access the clustered index if columns other than the index key and primary key are required. Secondary index records must always fit on a single B-tree page.

The compression of B-tree nodes (of both clustered and secondary indexes) is handled differently from compression of overflow pages used to store long VARCHAR, BLOB, or TEXT columns, as explained in the following sections.

Compression of B-Tree Pages

Because they are frequently updated, B-tree pages require special treatment. It is important to minimize the number of times B-tree nodes are split, as well as to minimize the need to uncompress and recompress their content.

One technique MySQL uses is to maintain some system information in the B-tree node in uncompressed form, thus facilitating certain in-place updates. For example, this allows rows to be delete-marked and deleted without any compression operation.

In addition, MySQL attempts to avoid unnecessary uncompression and recompression of index pages when they are changed. Within each B-tree page, the system keeps an uncompressed modification log to record changes made to the page. Updates and inserts of small records may be written to this modification log without requiring the entire page to be completely reconstructed.

When the space for the modification log runs out, InnoDB uncompresses the page, applies the changes and recompresses the page. If recompression fails (a situation known as a compression failure), the B-tree nodes are split and the process is repeated until the update or insert succeeds.

To avoid frequent compression failures in write-intensive workloads, such as for OLTP applications, MySQL sometimes reserves some empty space (padding) in the page, so that the modification log fills up sooner and the page is recompressed while there is still enough room to avoid splitting it. The amount of padding space left in each page varies as the system keeps track of the frequency of page splits. On a busy server doing frequent writes to compressed tables, you can adjust the innodb_compression_failure_threshold_pct and innodb_compression_pad_pct_max configuration options to fine-tune this mechanism.
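
For example, a sketch of making the padding mechanism react sooner and allowing more reserved space per page; the values shown are only illustrative:

-- Begin padding pages once compression failures exceed 10 percent.
SET GLOBAL innodb_compression_failure_threshold_pct=10;

-- Allow up to 75 percent of each page to be reserved as padding.
SET GLOBAL innodb_compression_pad_pct_max=75;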

Generally, MySQL requires that each B-tree page in an InnoDB table can accommodate at least two records. For compressed tables, this requirement has been relaxed. Leaf pages of B-tree nodes (whether of the primary key or secondary indexes) only need to accommodate one record, but that record must fit, in uncompressed form, in the per-page modification log. If innodb_strict_mode is ON, MySQL checks the maximum row size during CREATE TABLE or CREATE INDEX. If the row does not fit, the following error message is issued: ERROR HY000: Too big row.

If you create a table when innodb_strict_mode is OFF, and a subsequent INSERT or UPDATE statement attempts to create an index entry that does not fit in the size of the compressed page, the operation fails with ERROR 42000: Row size too large. (This error message does not name the index for which the record is too large, or mention the length of the index record or the maximum record size on that particular index page.) To solve this problem, rebuild the table with ALTER TABLE and select a larger compressed page size (KEY_BLOCK_SIZE), shorten any column prefix indexes, or disable compression entirely with ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPACT.
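
For example, a sketch of the remedies suggested above; the table name t1 is hypothetical:

-- Rebuild with a larger compressed page size...
ALTER TABLE t1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- ...or disable compression entirely.
ALTER TABLE t1 ROW_FORMAT=DYNAMIC;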

innodb_strict_mode is not applicable to general tablespaces, which also support compressed tables. Tablespace management rules for general tablespaces are strictly enforced independently of innodb_strict_mode. For more information, see Section 14.1.19, “CREATE TABLESPACE Syntax”.

Compressing BLOB, VARCHAR, and TEXT Columns

In an InnoDB table, BLOB, VARCHAR, and TEXT columns that are not part of the primary key may be stored on separately allocated overflow pages. We refer to these columns as off-page columns. Their values are stored on singly-linked lists of overflow pages.

For tables created in ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED, the values of BLOB, TEXT, or VARCHAR columns may be stored fully off-page, depending on their length and the length of the entire row. For columns that are stored off-page, the clustered index record only contains 20-byte pointers to the overflow pages, one per column. Whether any columns are stored off-page depends on the page size and the total size of the row. When the row is too long to fit entirely within the page of the clustered index, MySQL chooses the longest columns for off-page storage until the row fits on the clustered index page. As noted above, if a row does not fit by itself on a compressed page, an error occurs.

Note

For tables created in ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED, TEXT and BLOB columns that are less than or equal to 40 bytes are always stored in-line.

Tables created in older versions of MySQL use the Antelope file format, which supports only ROW_FORMAT=REDUNDANT and ROW_FORMAT=COMPACT. In these formats, MySQL stores the first 768 bytes of BLOB, VARCHAR, and TEXT columns in the clustered index record along with the primary key. The 768-byte prefix is followed by a 20-byte pointer to the overflow pages that contain the rest of the column value.

When a table is in COMPRESSED format, all data written to overflow pages is compressed as is; that is, MySQL applies the zlib compression algorithm to the entire data item. Other than the data, compressed overflow pages contain an uncompressed header and trailer comprising a page checksum and a link to the next overflow page, among other things. Therefore, very significant storage savings can be obtained for longer BLOB, TEXT, or VARCHAR columns if the data is highly compressible, as is often the case with text data. Image data, such as JPEG, is typically already compressed and so does not benefit much from being stored in a compressed table; the double compression can waste CPU cycles for little or no space savings.

The overflow pages are of the same size as other pages. A row containing ten columns stored off-page occupies ten overflow pages, even if the total length of the columns is only 8K bytes. In an uncompressed table, ten uncompressed overflow pages occupy 160K bytes. In a compressed table with an 8K page size, they occupy only 80K bytes. Thus, it is often more efficient to use compressed table format for tables with long column values.

For file-per-table tablespaces, using a 16K compressed page size can reduce storage and I/O costs for BLOB, VARCHAR, or TEXT columns, because such data often compresses well, and might therefore require fewer overflow pages, even though the B-tree nodes themselves take as many pages as in the uncompressed form. General tablespaces do not support a 16K compressed page size (KEY_BLOCK_SIZE). For more information, see Section 15.7.9, “InnoDB General Tablespaces”.

Compression and the InnoDB Buffer Pool

In a compressed InnoDB table, every compressed page (whether 1K, 2K, 4K or 8K) corresponds to an uncompressed page of 16K bytes (or a smaller size if innodb_page_size is set). To access the data in a page, MySQL reads the compressed page from disk if it is not already in the buffer pool, then uncompresses the page to its original form. This section describes how InnoDB manages the buffer pool with respect to pages of compressed tables.

To minimize I/O and to reduce the need to uncompress a page, at times the buffer pool contains both the compressed and uncompressed form of a database page. To make room for other required database pages, MySQL can evict from the buffer pool an uncompressed page, while leaving the compressed page in memory. Or, if a page has not been accessed in a while, the compressed form of the page might be written to disk, to free space for other data. Thus, at any given time, the buffer pool might contain both the compressed and uncompressed forms of the page, or only the compressed form of the page, or neither.

MySQL keeps track of which pages to keep in memory and which to evict using a least-recently-used (LRU) list, so that hot (frequently accessed) data tends to stay in memory. When compressed tables are accessed, MySQL uses an adaptive LRU algorithm to achieve an appropriate balance of compressed and uncompressed pages in memory. This adaptive algorithm is sensitive to whether the system is running in an I/O-bound or CPU-bound manner. The goal is to avoid spending too much processing time uncompressing pages when the CPU is busy, and to avoid doing excess I/O when the CPU has spare cycles that can be used for uncompressing compressed pages (that may already be in memory). When the system is I/O-bound, the algorithm prefers to evict the uncompressed copy of a page rather than both copies, to make more room for other disk pages to become memory resident. When the system is CPU-bound, MySQL prefers to evict both the compressed and uncompressed page, so that more memory can be used for hot pages, reducing the need to uncompress data that is held in memory only in compressed form.

Compression and the InnoDB Redo Log Files

Before a compressed page is written to a data file, MySQL writes a copy of the page to the redo log (if it has been recompressed since the last time it was written to the database). This is done to ensure that redo logs are usable for crash recovery, even in the unlikely case that the zlib library is upgraded and that change introduces a compatibility problem with the compressed data. Therefore, some increase in the size of log files, or a need for more frequent checkpoints, can be expected when using compression. The amount of increase in the log file size or checkpoint frequency depends on the number of times compressed pages are modified in a way that requires reorganization and recompression.
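
If you are certain that the zlib version in use will not change, this logging of recompressed page images can be disabled to reduce redo log volume (see the innodb_log_compressed_pages description in Section 15.9.1.6, “Compression for OLTP Workloads”). A minimal sketch:

SET GLOBAL innodb_log_compressed_pages=OFF;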

Compressed tables require the Barracuda file format. To create a compressed table in a file-per-table tablespace, innodb_file_per_table must be enabled and innodb_file_format must be set to Barracuda. There is no dependence on the innodb_file_format setting when creating a compressed table in a general tablespace. For more information, see Section 15.7.9, “InnoDB General Tablespaces”. The MySQL Enterprise Backup product supports the Barracuda file format.

15.9.1.6 Compression for OLTP Workloads

Traditionally, the InnoDB compression feature was recommended primarily for read-only or read-mostly workloads, such as in a data warehouse configuration. The rise of SSD storage devices, which are fast but relatively small and expensive, makes compression attractive also for OLTP workloads: high-traffic, interactive web sites can reduce their storage requirements and their I/O operations per second (IOPS) by using compressed tables with applications that do frequent INSERT, UPDATE, and DELETE operations.

Configuration options introduced in MySQL 5.6 let you adjust the way compression works for a particular MySQL instance, with an emphasis on performance and scalability for write-intensive operations:

  • innodb_compression_level lets you turn the degree of compression up or down. A higher value lets you fit more data onto a storage device, at the expense of more CPU overhead during compression. A lower value lets you reduce CPU overhead when storage space is not critical, or you expect the data is not especially compressible.

  • innodb_compression_failure_threshold_pct specifies a cutoff point for compression failures during updates to a compressed table. When this threshold is passed, MySQL begins to leave additional free space within each new compressed page, dynamically adjusting the amount of free space up to the percentage of page size specified by innodb_compression_pad_pct_max.

  • innodb_compression_pad_pct_max lets you adjust the maximum amount of space reserved within each page to record changes to compressed rows, without needing to compress the entire page again. The higher the value, the more changes can be recorded without recompressing the page. MySQL uses a variable amount of free space for the pages within each compressed table, only when a designated percentage of compression operations fail at runtime, requiring an expensive operation to split the compressed page.

  • innodb_log_compressed_pages lets you disable writing of images of re-compressed pages to the redo log. Re-compression may occur when changes are made to compressed data. This option is enabled by default to prevent corruption that could occur if a different version of the zlib compression algorithm is used during recovery. If you are certain that the zlib version will not change, disable innodb_log_compressed_pages to reduce redo log generation for workloads that modify compressed data.

Because working with compressed data sometimes involves keeping both compressed and uncompressed versions of a page in memory at the same time, when using compression with an OLTP-style workload, be prepared to increase the value of the innodb_buffer_pool_size configuration option.

15.9.1.7 SQL Compression Syntax Warnings and Errors

This section describes syntax warnings and errors that you may encounter when using the table compression feature with file-per-table tablespaces and general tablespaces.

SQL Compression Syntax Warnings and Errors for File-Per-Table Tablespaces

When innodb_strict_mode is enabled (the default), specifying ROW_FORMAT=COMPRESSED or KEY_BLOCK_SIZE in CREATE TABLE or ALTER TABLE statements produces the following error if innodb_file_per_table is disabled or if innodb_file_format is set to Antelope rather than Barracuda.

ERROR 1031 (HY000): Table storage engine for 't1' doesn't have this option
Note

The table is not created if the current configuration does not permit using compressed tables.

When innodb_strict_mode is disabled, specifying ROW_FORMAT=COMPRESSED or KEY_BLOCK_SIZE in CREATE TABLE or ALTER TABLE statements produces the following warnings if innodb_file_per_table is disabled.

mysql> SHOW WARNINGS;
+---------+------+---------------------------------------------------------------+
| Level   | Code | Message                                                       |
+---------+------+---------------------------------------------------------------+
| Warning | 1478 | InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table.        |
| Warning | 1478 | InnoDB: ignoring KEY_BLOCK_SIZE=4.                            |
| Warning | 1478 | InnoDB: ROW_FORMAT=COMPRESSED requires innodb_file_per_table. |
| Warning | 1478 | InnoDB: assuming ROW_FORMAT=DYNAMIC.                          |
+---------+------+---------------------------------------------------------------+

Similar warnings are issued if innodb_file_format is set to Antelope rather than Barracuda.

Note

These messages are only warnings, not errors, and the table is created without compression, as if the options were not specified.

The non-strict behavior lets you import a mysqldump file into a database that does not support compressed tables, even if the source database contained compressed tables. In that case, MySQL creates the table in ROW_FORMAT=DYNAMIC instead of preventing the operation.

To import the dump file into a new database, and have the tables re-created as they exist in the original database, ensure the server has the proper settings for the configuration parameters innodb_file_format and innodb_file_per_table.

The attribute KEY_BLOCK_SIZE is permitted only when ROW_FORMAT is specified as COMPRESSED or is omitted. Specifying a KEY_BLOCK_SIZE with any other ROW_FORMAT generates a warning that you can view with SHOW WARNINGS. However, the table is created without compression; the specified KEY_BLOCK_SIZE is ignored.

+---------+------+------------------------------------------------------------------+
| Level   | Code | Message                                                          |
+---------+------+------------------------------------------------------------------+
| Warning | 1478 | InnoDB: ignoring KEY_BLOCK_SIZE=n unless ROW_FORMAT=COMPRESSED.  |
+---------+------+------------------------------------------------------------------+

If you are running with innodb_strict_mode enabled, the combination of a KEY_BLOCK_SIZE with any ROW_FORMAT other than COMPRESSED generates an error, not a warning, and the table is not created.

Table 15.7, “ROW_FORMAT and KEY_BLOCK_SIZE Options” provides an overview of the ROW_FORMAT and KEY_BLOCK_SIZE options that are used with CREATE TABLE or ALTER TABLE.

Table 15.7 ROW_FORMAT and KEY_BLOCK_SIZE Options

  • ROW_FORMAT=REDUNDANT
    Usage notes: Storage format used prior to MySQL 5.0.3.
    Description: Less efficient than ROW_FORMAT=COMPACT; for backward compatibility.

  • ROW_FORMAT=COMPACT
    Usage notes: Default storage format since MySQL 5.0.3.
    Description: Stores a prefix of 768 bytes of long column values in the clustered index page, with the remaining bytes stored in an overflow page.

  • ROW_FORMAT=DYNAMIC
    Usage notes: File-per-table tablespaces require innodb_file_format=Barracuda.
    Description: Stores values within the clustered index page if they fit; if not, stores only a 20-byte pointer to an overflow page (no prefix).

  • ROW_FORMAT=COMPRESSED
    Usage notes: File-per-table tablespaces require innodb_file_format=Barracuda.
    Description: Compresses the table and indexes using zlib.

  • KEY_BLOCK_SIZE=n
    Usage notes: File-per-table tablespaces require innodb_file_format=Barracuda.
    Description: Specifies a compressed page size of 1, 2, 4, 8, or 16 kilobytes; implies ROW_FORMAT=COMPRESSED. For general tablespaces, a KEY_BLOCK_SIZE value equal to the InnoDB page size is not permitted.

Table 15.8, “CREATE/ALTER TABLE Warnings and Errors when InnoDB Strict Mode is OFF” summarizes error conditions that occur with certain combinations of configuration parameters and options on the CREATE TABLE or ALTER TABLE statements, and how the options appear in the output of SHOW TABLE STATUS.

When innodb_strict_mode is OFF, MySQL creates or alters the table, but ignores certain settings as shown below. You can see the warning messages in the MySQL error log. When innodb_strict_mode is ON, these specified combinations of options generate errors, and the table is not created or altered. To see the full description of the error condition, issue the SHOW ERRORS statement, as in this example:

mysql> CREATE TABLE x (id INT PRIMARY KEY, c INT)
    -> ENGINE=INNODB KEY_BLOCK_SIZE=33333;
ERROR 1005 (HY000): Can't create table 'test.x' (errno: 1478)

mysql> SHOW ERRORS;
+-------+------+-------------------------------------------+
| Level | Code | Message                                   |
+-------+------+-------------------------------------------+
| Error | 1478 | InnoDB: invalid KEY_BLOCK_SIZE=33333.     |
| Error | 1005 | Can't create table 'test.x' (errno: 1478) |
+-------+------+-------------------------------------------+

Table 15.8 CREATE/ALTER TABLE Warnings and Errors when InnoDB Strict Mode is OFF

  • ROW_FORMAT=REDUNDANT
    Warning or error condition: None.
    Resulting ROW_FORMAT, as shown in SHOW TABLE STATUS: REDUNDANT

  • ROW_FORMAT=COMPACT
    Warning or error condition: None.
    Resulting ROW_FORMAT, as shown in SHOW TABLE STATUS: COMPACT

  • ROW_FORMAT=COMPRESSED or ROW_FORMAT=DYNAMIC or KEY_BLOCK_SIZE is specified
    Warning or error condition: Ignored for file-per-table tablespaces unless both innodb_file_format=Barracuda and innodb_file_per_table are enabled. General tablespaces support all row formats (with some restrictions) regardless of innodb_file_format and innodb_file_per_table settings. See Section 15.7.9, “InnoDB General Tablespaces”.
    Resulting ROW_FORMAT, as shown in SHOW TABLE STATUS: the default row format for file-per-table tablespaces; the specified row format for general tablespaces

  • Invalid KEY_BLOCK_SIZE is specified (not 1, 2, 4, 8 or 16)
    Warning or error condition: KEY_BLOCK_SIZE is ignored.
    Resulting ROW_FORMAT, as shown in SHOW TABLE STATUS: the specified row format, or the default row format

  • ROW_FORMAT=COMPRESSED and valid KEY_BLOCK_SIZE are specified
    Warning or error condition: None; the specified KEY_BLOCK_SIZE is used.
    Resulting ROW_FORMAT, as shown in SHOW TABLE STATUS: COMPRESSED

  • KEY_BLOCK_SIZE is specified with REDUNDANT, COMPACT or DYNAMIC row format
    Warning or error condition: KEY_BLOCK_SIZE is ignored.
    Resulting ROW_FORMAT, as shown in SHOW TABLE STATUS: REDUNDANT, COMPACT or DYNAMIC

  • ROW_FORMAT is not one of REDUNDANT, COMPACT, DYNAMIC or COMPRESSED
    Warning or error condition: Ignored if recognized by the MySQL parser; otherwise, an error is issued.
    Resulting ROW_FORMAT, as shown in SHOW TABLE STATUS: the default row format or N/A

When innodb_strict_mode is ON (the default), MySQL rejects invalid ROW_FORMAT or KEY_BLOCK_SIZE parameters and issues errors, and the table is not created or altered. When innodb_strict_mode is OFF, MySQL issues warnings instead of errors and ignores the invalid parameters.

It is not possible to see the chosen KEY_BLOCK_SIZE using SHOW TABLE STATUS. The statement SHOW CREATE TABLE displays the KEY_BLOCK_SIZE (even if it was ignored when creating the table). The real compressed page size of the table cannot be displayed by MySQL.

SQL Compression Syntax Warnings and Errors for General Tablespaces
  • If FILE_BLOCK_SIZE was not defined for the general tablespace when the tablespace was created, the tablespace cannot contain compressed tables. If you attempt to add a compressed table, an error is returned, as shown in the following example:

    mysql> CREATE TABLESPACE `ts1` ADD DATAFILE 'ts1.ibd' Engine=InnoDB;
    Query OK, 0 rows affected (0.01 sec)
    
    mysql> CREATE TABLE t1 (c1 INT PRIMARY KEY) TABLESPACE ts1 ROW_FORMAT=COMPRESSED
    KEY_BLOCK_SIZE=8;
    ERROR 1478 (HY000): InnoDB: Tablespace `ts1` cannot contain a COMPRESSED table
  • Attempting to add a table with an invalid KEY_BLOCK_SIZE to a general tablespace returns an error, as shown in the following example:

    mysql> CREATE TABLESPACE `ts2` ADD DATAFILE 'ts2.ibd' FILE_BLOCK_SIZE = 8192 Engine=InnoDB;
    Query OK, 0 rows affected (0.01 sec)
      
    mysql> CREATE TABLE t2 (c1 INT PRIMARY KEY) TABLESPACE ts2 ROW_FORMAT=COMPRESSED
    KEY_BLOCK_SIZE=4;
    ERROR 1478 (HY000): InnoDB: Tablespace `ts2` uses block size 8192 and cannot
    contain a table with physical page size 4096

    For general tablespaces, the KEY_BLOCK_SIZE of the table must be equal to the FILE_BLOCK_SIZE of the tablespace divided by 1024. For example, if the FILE_BLOCK_SIZE of the tablespace is 8192, the KEY_BLOCK_SIZE of the table must be 8. A matching combination is shown in the sketch after this list.

  • Attempting to add a table with an uncompressed row format to a general tablespace configured to store compressed tables returns an error, as shown in the following example:

    mysql> CREATE TABLESPACE `ts3` ADD DATAFILE 'ts3.ibd' FILE_BLOCK_SIZE = 8192 Engine=InnoDB;
    Query OK, 0 rows affected (0.01 sec)
    
    mysql> CREATE TABLE t3 (c1 INT PRIMARY KEY) TABLESPACE ts3 ROW_FORMAT=COMPACT;
    ERROR 1478 (HY000): InnoDB: Tablespace `ts3` uses block size 8192 and cannot
    contain a table with physical page size 16384
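
As a minimal sketch of a matching combination (the tablespace and table names ts4 and t4 are illustrative), a table with KEY_BLOCK_SIZE=8 can be added to a general tablespace created with FILE_BLOCK_SIZE=8192:

mysql> CREATE TABLESPACE `ts4` ADD DATAFILE 'ts4.ibd' FILE_BLOCK_SIZE = 8192 Engine=InnoDB;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE t4 (c1 INT PRIMARY KEY) TABLESPACE ts4 ROW_FORMAT=COMPRESSED
KEY_BLOCK_SIZE=8;
Query OK, 0 rows affected (0.01 sec)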

innodb_strict_mode is not applicable to general tablespaces. Tablespace management rules for general tablespaces are strictly enforced independently of innodb_strict_mode. For more information, see Section 14.1.19, “CREATE TABLESPACE Syntax”.

For more information about using compressed tables with general tablespaces, see Section 15.7.9, “InnoDB General Tablespaces”.

15.9.2 InnoDB Page Compression

InnoDB supports page-level compression for tables that reside in file-per-table tablespaces. This feature is referred to as Transparent Page Compression. Page compression is enabled by specifying the COMPRESSION attribute with CREATE TABLE or ALTER TABLE. Supported compression algorithms include Zlib and LZ4.

Supported Platforms

Page compression requires sparse file and hole punching support. Page compression is supported on Windows with NTFS, and on the following subset of MySQL-supported Linux platforms where the kernel level provides hole punching support:

  • RHEL 7 and derived distributions that use kernel version 3.10.0-123 or higher

  • OEL 5.10 (UEK2) kernel version 2.6.39 or higher

  • OEL 6.5 (UEK3) kernel version 3.8.13 or higher

  • OEL 7.0 kernel version 3.8.13 or higher

  • SLE11 kernel version 3.0-x

  • SLE12 kernel version 3.12-x

  • OES11 kernel version 3.0-x

  • Ubuntu 14.04 LTS kernel version 3.13 or higher

  • Ubuntu 12.04 LTS kernel version 3.2 or higher

  • Debian 7 kernel version 3.2 or higher

Note

Not all file systems available for a given Linux distribution support hole punching.

How Page Compression Works

When a page is written, it is compressed using the specified compression algorithm. The compressed data is written to disk, where the hole punching mechanism releases empty blocks from the end of the page. If compression fails, data is written out as-is.

Hole Punch Size on Linux

On Linux systems, the file system block size is the unit size used for hole punching. Therefore, page compression only works if page data can be compressed to a size that is less than or equal to the InnoDB page size minus the file system block size. For example, if innodb_page_size=16K and the file system block size is 4K, page data must compress to less than or equal to 12K to make hole punching possible.

Hole Punch Size on Windows

On Windows systems, the underlying infrastructure for sparse files is based on NTFS compression. Hole punching size is the NTFS compression unit, which is 16 times the NTFS cluster size. Cluster sizes and their compression units are shown in the following table:

Table 15.9 Windows NTFS Cluster Size and Compression Units

Cluster Size | Compression Unit
512 Bytes    | 8 KB
1 KB         | 16 KB
2 KB         | 32 KB
4 KB         | 64 KB

Page compression on Windows systems only works if page data can be compressed to a size that is less than or equal to the InnoDB page size minus the compression unit size.

The default NTFS cluster size is 4K, for which the compression unit size is 64K. This means that page compression has no benefit for an out-of-the-box Windows NTFS configuration, as the maximum innodb_page_size is also 64K.

For page compression to work on Windows, the file system must be created with a cluster size smaller than 4K, and the innodb_page_size must be at least twice the size of the compression unit. For example, for page compression to work on Windows, you could build the file system with a cluster size of 512 Bytes (which has a compression unit of 8KB) and initialize InnoDB with an innodb_page_size value of 16K or greater.
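
The following is a minimal sketch, not a recommendation; the drive letter and my.ini contents are assumptions. The volume is formatted with 512-byte clusters, and the instance is initialized with a 16K page size:

C:\> format D: /FS:NTFS /A:512

# my.ini, set before the data directory is initialized
[mysqld]
innodb_page_size=16K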

Enabling Page Compression

To enable page compression, specify the COMPRESSION attribute in the CREATE TABLE statement. For example:

CREATE TABLE t1 (c1 INT) COMPRESSION="zlib";
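
LZ4 compression is specified the same way; for example (the table name t2 is illustrative):

CREATE TABLE t2 (c1 INT) COMPRESSION="lz4";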

You can also enable page compression in an ALTER TABLE statement. However, ALTER TABLE ... COMPRESSION only updates the tablespace compression attribute. Writes to the tablespace that occur after setting the new compression algorithm use the new setting, but to apply the new compression algorithm to existing pages, you must rebuild the table using OPTIMIZE TABLE.

ALTER TABLE t1 COMPRESSION="zlib";
OPTIMIZE TABLE t1;

Disabling Page Compression

To disable page compression, set COMPRESSION=None using ALTER TABLE. Writes to the tablespace that occur after setting COMPRESSION=None no longer use page compression. To uncompress existing pages, you must rebuild the table using OPTIMIZE TABLE after setting COMPRESSION=None.

ALTER TABLE t1 COMPRESSION="None";
OPTIMIZE TABLE t1;

Page Compression Metadata

Page compression metadata is found in the INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES table, in the following columns:

  • FS_BLOCK_SIZE: The file system block size, which is the unit size used for hole punching.

  • FILE_SIZE: The apparent size of the file, which represents the maximum size of the file, uncompressed.

  • ALLOCATED_SIZE: The actual size of the file, which is the amount of space allocated on disk.

Note

On Unix-like systems, ls -l tablespace_name.ibd shows the apparent file size (equivalent to FILE_SIZE) in bytes. To view the actual amount of space allocated on disk (equivalent to ALLOCATED_SIZE), use du --block-size=1 tablespace_name.ibd. The --block-size=1 option prints the allocated space in bytes instead of blocks, so that it can be compared to ls -l output.
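
For example (a sketch; the tablespace file path is an assumption):

$ ls -l /var/lib/mysql/test/t1.ibd              # apparent size, comparable to FILE_SIZE
$ du --block-size=1 /var/lib/mysql/test/t1.ibd  # allocated size, comparable to ALLOCATED_SIZE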

Use SHOW CREATE TABLE to view the current page compression setting (Zlib, Lz4, or None). A table may contain a mix of pages with different compression settings.

In the following example, page compression metadata for the employees table is retrieved from the INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES table.

# Create the employees table with Zlib page compression

CREATE TABLE employees (
    emp_no      INT             NOT NULL,
    birth_date  DATE            NOT NULL,
    first_name  VARCHAR(14)     NOT NULL,
    last_name   VARCHAR(16)     NOT NULL,
    gender      ENUM ('M','F')  NOT NULL,  
    hire_date   DATE            NOT NULL,
    PRIMARY KEY (emp_no)
) COMPRESSION="zlib";

# Insert data (not shown)
  
# Query page compression metadata in INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES
  
mysql> SELECT SPACE, NAME, FS_BLOCK_SIZE, FILE_SIZE, ALLOCATED_SIZE FROM
INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES WHERE NAME='employees/employees'\G
*************************** 1. row ***************************
         SPACE: 45
          NAME: employees/employees
 FS_BLOCK_SIZE: 4096
     FILE_SIZE: 23068672
ALLOCATED_SIZE: 19415040

Page compression metadata for the employees table shows that the apparent file size is 23068672 bytes while the actual file size (with page compression) is 19415040 bytes. The file system block size is 4096 bytes, which is the block size used for hole punching.
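
The compression setting for the same table can be confirmed with SHOW CREATE TABLE (a sketch; only the statement is shown):

mysql> SHOW CREATE TABLE employees\G

The COMPRESSION attribute appears in the table definition in the output.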

Page Compression Limitations and Usage Notes

  • Page compression is disabled if the file system block size (or compression unit size on Windows) * 2 > innodb_page_size.

  • Page compression is not supported for tables that reside in shared tablespaces, which include the system tablespace, the temporary tablespace, and general tablespaces.

  • Page compression is not supported for undo log tablespaces.

  • Page compression is not supported for redo log pages.

  • R-tree pages, which are used for spatial indexes, are not compressed.

  • Pages that belong to compressed tables (ROW_FORMAT=COMPRESSED) are left as-is.

  • During recovery, updated pages are written out in an uncompressed form.

  • Loading a page-compressed tablespace on a server that does not support the compression algorithm that was used causes an I/O error.

  • Before downgrading to an earlier version of MySQL that does not support page compression, uncompress the tables that use the page compression feature. To uncompress a table, run ALTER TABLE ... COMPRESSION=None and OPTIMIZE TABLE.

  • Page-compressed tablespaces can be copied between Linux and Windows servers if the compression algorithm that was used is available on both servers.

  • Preserving page compression when moving a page-compressed tablespace file from one host to another requires a utility that preserves sparse files; see the sketch after this list.

  • Better page compression may be achieved on Fusion-io hardware with NVMFS than on other platforms, as NVMFS is designed to take advantage of punch hole functionality.

  • Using the page compression feature with a large InnoDB page size and relatively small file system block size could result in write amplification. For example, a maximum InnoDB page size of 64KB with a 4KB file system block size may improve compression but may also increase demand on the buffer pool, leading to increased I/O and potential write amplification.
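
As a minimal sketch (file names, paths, and the remote host are assumptions), GNU cp and rsync are examples of utilities that can preserve sparse files when copying a page-compressed tablespace:

# Local copy, preserving holes in the destination file
$ cp --sparse=always t1.ibd /backup/t1.ibd

# Remote copy; --sparse recreates holes on the destination host
$ rsync --sparse t1.ibd backup-host:/backup/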

15.10 InnoDB File-Format Management

As InnoDB evolves, data file formats that are not compatible with prior versions of InnoDB are sometimes required to support new features. To help manage compatibility in upgrade and downgrade situations, and systems that run different versions of MySQL, InnoDB uses named file formats. InnoDB currently supports two named file formats, Antelope and Barracuda.

This section discusses enabling InnoDB file formats for new InnoDB tables, verifying compatibility of different file formats between MySQL releases, and identifying the file format in use.

InnoDB file format settings do not apply to tables stored in general tablespaces. General tablespaces provide support for all row formats and associated features. For more information, see Section 15.7.9, “InnoDB General Tablespaces”.

Note

The following file format configuration parameters have new default values:

  • The innodb_file_format default value was changed to Barracuda. The previous default value was Antelope.

  • The innodb_large_prefix default value was changed to ON. The previous default was OFF.

The following file format configuration parameters are deprecated and may be removed in a future release:

  • innodb_file_format

  • innodb_file_format_check

  • innodb_file_format_max

  • innodb_large_prefix

The file format configuration parameters were provided for creating tables compatible with earlier versions of InnoDB in MySQL 5.1. Now that MySQL 5.1 has reached the end of its product lifecycle, the parameters are no longer required.

15.10.1 Enabling File Formats

The innodb_file_format configuration option enables an InnoDB file format for file-per-table tablespaces.

Barracuda is the default innodb_file_format setting. In earlier releases, the default file format was Antelope.

Note

The innodb_file_format configuration option is deprecated and may be removed in a future release. For more information, see Section 15.10, “InnoDB File-Format Management”.

You can set the value of innodb_file_format on the command line when you start mysqld, or in the option file (my.cnf on Unix, my.ini on Windows). You can also change it dynamically with a SET GLOBAL statement.

SET GLOBAL innodb_file_format=Barracuda;
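
To make the setting persistent across restarts, it can instead be placed in the option file (a minimal sketch):

[mysqld]
innodb_file_format=Barracuda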

15.10.2 Verifying File Format Compatibility

InnoDB incorporates several checks to guard against the possible crashes and data corruptions that might occur if you run an old release of the MySQL server on InnoDB data files that use a newer file format. These checks take place when the server is started, and when you first access a table. This section describes these checks, how you can control them, and error and warning conditions that might arise.

Backward Compatibility

You only need to consider backward file format compatibility when using a recent version of InnoDB (MySQL 5.5 and higher with InnoDB) alongside an older version (MySQL 5.1 or earlier, with the built-in InnoDB rather than the InnoDB Plugin). To minimize the chance of compatibility issues, you can standardize on the InnoDB Plugin for all your MySQL 5.1 and earlier database servers.

In general, a newer version of InnoDB may create a table or index that cannot safely be read or written with an older version of InnoDB without risk of crashes, hangs, wrong results or corruptions. InnoDB includes a mechanism to guard against these conditions, and to help preserve compatibility among database files and versions of InnoDB. This mechanism lets you take advantage of some new features of an InnoDB release (such as performance improvements and bug fixes), and still preserve the option of using your database with an old version of InnoDB, by preventing accidental use of new features that create downward-incompatible disk files.

If a version of InnoDB supports a particular file format (whether or not that format is the default), you can query and update any table that requires that format or an earlier format. Only the creation of new tables using new features is limited based on the particular file format enabled. Conversely, if a tablespace contains a table or index that uses a file format that is not supported, it cannot be accessed at all, even for read access.

The only way to downgrade an InnoDB tablespace to the earlier Antelope file format is to copy the data to a new table, in a tablespace that uses the earlier format.

The easiest way to determine the file format of an existing InnoDB tablespace is to examine the properties of the table it contains, using the SHOW TABLE STATUS statement or by querying the INFORMATION_SCHEMA.TABLES table. If the Row_format of the table is reported as 'Compressed' or 'Dynamic', the tablespace containing the table supports the Barracuda format.
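
For example, the row format can be checked with a query such as the following (the schema and table names are placeholders):

mysql> SELECT TABLE_SCHEMA, TABLE_NAME, ROW_FORMAT
       FROM INFORMATION_SCHEMA.TABLES
       WHERE TABLE_SCHEMA='test' AND TABLE_NAME='t1';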

Internal Details

Every InnoDB file-per-table tablespace (represented by a *.ibd file) is labeled with a file format identifier. The system tablespace (represented by the ibdata files) is tagged with the highest file format in use in a group of InnoDB database files, and this tag is checked when the files are opened.

Creating a compressed table, or a table with ROW_FORMAT=DYNAMIC, updates the file header of the corresponding file-per-table .ibd file and the table type in the InnoDB data dictionary with the identifier for the Barracuda file format. From that point forward, the table cannot be used with a version of InnoDB that does not support the Barracuda file format. To protect against anomalous behavior, InnoDB performs a compatibility check when the table is opened. (In many cases, the ALTER TABLE statement recreates a table and thus changes its properties. The special case of adding or dropping indexes without rebuilding the table is described in Section 15.13.1, “Overview of Online DDL”.)

General tablespaces, which are also represented by a *.ibd file, support both Antelope and Barracuda file formats. For more information about general tablespaces, see Section 15.7.9, “InnoDB General Tablespaces”.

Definition of ib-file set

To avoid confusion, for the purposes of this discussion we define the term ib-file set to mean the set of operating system files that InnoDB manages as a unit. The ib-file set includes the following files:

  • The system tablespace (one or more ibdata files), which contains internal system information (including internal catalogs and undo information) and may include user data and indexes.

  • Zero or more single-table tablespaces (also called file-per-table tablespaces, represented by *.ibd files).

  • The InnoDB log files, usually two (ib_logfile0 and ib_logfile1), used for crash recovery and in backups.

An ib-file set does not include the corresponding .frm files that contain metadata about InnoDB tables. The .frm files are created and managed by MySQL, and can sometimes get out of sync with the internal metadata in InnoDB.

Multiple tables, even from more than one database, can be stored in a single ib-file set. (In MySQL, a database is a logical collection of tables, what other systems refer to as a schema or catalog.)

15.10.2.1 Compatibility Check When InnoDB Is Started

To prevent possible crashes or data corruptions when InnoDB opens an ib-file set, it checks that it can fully support the file formats in use within the ib-file set. If the system is restarted following a crash, or a fast shutdown (i.e., innodb_fast_shutdown is greater than zero), there may be on-disk data structures (such as redo or undo entries, or doublewrite pages) that are in a too-new format for the current software. During the recovery process, serious damage can be done to your data files if these data structures are accessed. The startup check of the file format occurs before any recovery process begins, thereby preventing consistency issues with the new tables or startup problems for the MySQL server.

Beginning with InnoDB version 1.0.1, the system tablespace records an identifier or tag for the highest file format used by any table in any of the tablespaces that are part of the ib-file set. Checks against this file format tag are controlled by the configuration parameter innodb_file_format_check, which is ON by default.

If the file format tag in the system tablespace is newer or higher than the highest version supported by the particular currently executing software and if innodb_file_format_check is ON, the following error is issued when the server is started:

InnoDB: Error: the system tablespace is in a
file format that this version doesn't support

You can also set innodb_file_format_check to a file format name. Doing so prevents InnoDB from starting if the current software does not support the file format specified. It also sets the high water mark to the value you specify. The ability to set innodb_file_format_check is useful (with future releases) if you manually downgrade all of the tables in an ib-file set. You can then rely on the file format check at startup if you subsequently use an older version of InnoDB to access the ib-file set.

In some limited circumstances, you might want to start the server and use an ib-file set that is in a new file format that is not supported by the software you are using. If you set the configuration parameter innodb_file_format_check to OFF, InnoDB opens the database, but issues this warning message in the error log:

InnoDB: Warning: the system tablespace is in a
file format that this version doesn't support

Note

This is a dangerous setting, as it permits the recovery process to run, possibly corrupting your database if the previous shutdown was a crash or fast shutdown. You should only set innodb_file_format_check to OFF if you are sure that the previous shutdown was done with innodb_fast_shutdown=0, so that essentially no recovery process occurs.

The parameter innodb_file_format_check affects only what happens when a database is opened, not subsequently. Conversely, the parameter innodb_file_format (which enables a specific format) only determines whether or not a new table can be created in the enabled format and has no effect on whether or not a database can be opened.

The file format tag is a high water mark, and as such it is increased after the server is started, if a table in a higher format is created or an existing table is accessed for read or write (assuming its format is supported). If you access an existing table in a format higher than the format the running software supports, the system tablespace tag is not updated, but table-level compatibility checking applies (and an error is issued), as described in Section 15.10.2.2, “Compatibility Check When a Table Is Opened”. Any time the high water mark is updated, the value of innodb_file_format_check is updated as well, so the command SELECT @@innodb_file_format_check; displays the name of the latest file format known to be used by tables in the currently open ib-file set and supported by the currently executing software.

15.10.2.2 Compatibility Check When a Table Is Opened

When a table is first accessed, InnoDB (including some releases prior to InnoDB 1.0) checks that the file format of the tablespace in which the table is stored is fully supported. This check prevents crashes or corruptions that would otherwise occur when a table using a data structure that is too new for the running software is encountered.

All tables using any file format supported by a release can be read or written (assuming the user has sufficient privileges). The setting of the system configuration parameter innodb_file_format can prevent creating a new table that uses a specific file format, even if the file format is supported by a given release. Such a setting might be used to p