# Troubleshoot High Disk I/O Usage in TiDB

This document describes how to locate and address high disk I/O usage issues in TiDB.
## Check the current I/O metrics
If TiDB's response slows down after you have ruled out CPU bottlenecks and bottlenecks caused by transaction conflicts, check the I/O metrics to help determine the current system bottleneck.
### Locate I/O issues from monitor
The quickest way to locate I/O issues is to view the overall I/O status on the monitor, such as the Grafana dashboard that TiUP deploys by default. The dashboard panels related to I/O include **Overview**, **Node_exporter**, and **Disk-Performance**.
#### The first type of monitoring panels
In **Overview** > **System Info** > **IO Util**, you can see the I/O status of each machine in the cluster. This metric is similar to `util` in the Linux `iostat` tool: a higher percentage indicates higher disk I/O usage.
- If only one machine in the monitor shows high I/O usage, there might be read and write hotspots on that machine.
- If most machines in the monitor show high I/O usage, the cluster has a high overall I/O load.
For the first situation above (only one machine with high I/O usage), you can further check I/O metrics on the **Disk-Performance** dashboard, such as `Disk Latency` and `Disk Load`, to determine whether an anomaly exists. If necessary, use the fio tool to check the disk.
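When checking a disk with fio, a quick random-write test with per-write syncs roughly resembles TiKV's small synced writes. The following is a sketch only; the test file path, size, and runtime are illustrative placeholders. Run it against a scratch path on the disk under test, never against the live TiKV data files:

```shell
# Sketch of a fio disk check; path, size, and runtime are example values only.
TEST_FILE="${TEST_FILE:-./fio-disk-check.tmp}"

if command -v fio >/dev/null 2>&1; then
    # 4 KB random writes with an fdatasync after each write, roughly
    # resembling Raft log write patterns
    fio --name=disk-check --filename="$TEST_FILE" --rw=randwrite \
        --bs=4k --size=64M --fdatasync=1 --ioengine=psync \
        --runtime=10 --time_based
    rm -f "$TEST_FILE"
    DISK_CHECK_MSG="fio check finished; inspect the sync latency percentiles above"
else
    DISK_CHECK_MSG="fio is not installed; install it first (for example: yum install -y fio)"
fi
echo "$DISK_CHECK_MSG"
```

Pay particular attention to the reported fsync/completion latency percentiles: high tail latency here usually corresponds to high `append log duration` on the same machine.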
#### The second type of monitoring panels
The main storage component of the TiDB cluster is TiKV. One TiKV instance contains two RocksDB instances: one for storing Raft logs, located in `data/raft`, and the other for storing real data, located in `data/db`.
In **TiKV-Details** > **Raft IO**, you can see the metrics related to disk writes of these two instances:
- `Append log duration`: the response time of writes into the RocksDB instance that stores Raft logs. The `.99` response time should be within 50 ms.
- `Apply log duration`: the response time of writes into the RocksDB instance that stores real data. The `.99` response time should be within 100 ms.
These two metrics also have corresponding `.. per server` monitoring panels to help you view write hotspots.
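If you prefer querying Prometheus directly instead of using Grafana, the `.99` latencies above can be computed with `histogram_quantile`. This is a sketch; the metric names `tikv_raftstore_append_log_duration_seconds` and `tikv_raftstore_apply_log_duration_seconds` are assumed from TiKV's usual Prometheus naming, so verify them against your own Prometheus instance:

```promql
# .99 append log duration per TiKV instance (assumed metric name)
histogram_quantile(0.99,
  sum(rate(tikv_raftstore_append_log_duration_seconds_bucket[1m])) by (le, instance))

# .99 apply log duration per TiKV instance (assumed metric name)
histogram_quantile(0.99,
  sum(rate(tikv_raftstore_apply_log_duration_seconds_bucket[1m])) by (le, instance))
```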
#### The third type of monitoring panels
In **TiKV-Details** > **Storage**, there are monitoring metrics related to storage:
- `Storage command total`: the number of different commands received.
- `Storage async write duration`: includes metrics such as `disk sync duration`, which might be related to Raft I/O. If you encounter an abnormal situation, check the logs of the related components to confirm their working statuses.
#### Other panels
In addition, some other panel metrics can help you determine whether the bottleneck is I/O, and you can try tuning some parameters. By checking the `prewrite`/`commit`/`raw-put` (the last for raw key-value clusters only) durations in TiKV `gRPC duration`, you can confirm that the bottleneck is indeed slow TiKV writes. The common causes of slow TiKV writes are as follows:
- `append log` is slow. TiKV Grafana's `Raft I/O` and `append log duration` metrics are relatively high, which is often due to slow disk writes. You can check the value of `WAL Sync Duration max` in **RocksDB-raft** to determine the cause of slow `append log`. Otherwise, you might need to report a bug.
- The `raftstore` thread is busy. In TiKV Grafana, `Raft Propose`/`propose wait duration` is significantly higher than `append log duration`. Check the following aspects for troubleshooting:
    - Whether the value of `store-pool-size` of `[raftstore]` is too small. It is recommended to set this value within `[1, 5]` and not too large.
    - Whether the CPU resources on the machine are insufficient.
- `apply log` is slow. TiKV Grafana's `Raft I/O` and `apply log duration` metrics are relatively high, which usually occurs along with a relatively high `Raft Propose`/`apply wait duration`. The possible causes are as follows:
    - The value of `apply-pool-size` of `[raftstore]` is too small. It is recommended to set this value within `[1, 5]` and not too large. In this case, the value of `Thread CPU`/`apply cpu` is also relatively high.
    - Insufficient CPU resources on the machine.
    - Write hotspot issue of a single Region (currently, the solution to this issue is still on the way). The CPU usage of a single `apply` thread is high (which can be viewed by modifying the Grafana expression, appending `by (instance, name)`).
    - Slow writes into RocksDB, with a high `RocksDB kv`/`max write duration`. A single Raft log might contain multiple key-value pairs (kvs). 128 kvs are written into RocksDB in a batch, so one `apply` log might involve multiple RocksDB writes.
    - For other causes, report them as bugs.
- `raft commit log` is slow. In TiKV Grafana, the `Raft I/O` and `commit log duration` (only available in Grafana 4.x) metrics are relatively high. Each Region corresponds to an independent Raft group. Raft has a flow control mechanism similar to the sliding window mechanism of TCP. To control the size of the sliding window, adjust the `[raftstore] raft-max-inflight-msgs` parameter. If there is a write hotspot and `commit log duration` is high, you can properly set this parameter to a larger value, such as `1024`.
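The `[raftstore]` tunables mentioned above live in the TiKV configuration file. The following sketch shows where they go; the values are illustrative examples, not recommendations for every workload:

```toml
# TiKV configuration sketch; example values only, tune per workload.
[raftstore]
# Thread pool writing Raft logs; keep within [1, 5].
store-pool-size = 2
# Thread pool applying committed logs; keep within [1, 5].
apply-pool-size = 2
# Raft sliding-window size; raise (for example, to 1024) if commit log
# duration stays high under a write hotspot.
raft-max-inflight-msgs = 256
```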
### Locate I/O issues from log
- If the client reports errors such as `server is busy` or, especially, `raftstore is busy`, the errors might be related to I/O issues.

    You can check the monitoring panel (**Grafana** -> **TiKV** -> **errors**) to confirm the specific cause of the `busy` error. `server is busy` is part of TiKV's flow control mechanism: TiKV informs `tidb/ti-client` that its current pressure is too high, and the client should retry later.
- `Write stall` appears in TiKV RocksDB logs.

    It might be that too many level-0 SST files cause the write stall. To address the issue, you can add the `[rocksdb] max-sub-compactions = 2 (or 3)` parameter to speed up the compaction of level-0 SST files. This parameter means that the compaction tasks from level-0 to level-1 can be divided into `max-sub-compactions` subtasks for multi-threaded concurrent execution.

    If the disk's I/O capability fails to keep up with writes, it is recommended to scale up the disk. If the throughput of the disk reaches the upper limit (for example, the throughput of a SATA SSD is much lower than that of an NVMe SSD), which results in write stall, but the CPU resources are relatively sufficient, you can try to use a compression algorithm with a higher compression ratio to relieve the pressure on the disk, that is, use CPU resources to make up for disk resources.

    For example, when the pressure of `default cf compaction` is relatively high, you can change the parameter `[rocksdb.defaultcf] compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]` to `compression-per-level = ["no", "no", "zstd", "zstd", "zstd", "zstd", "zstd"]`.
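Put together, the two RocksDB adjustments above look like the following configuration fragment (example values taken from the text; adapt them to your hardware):

```toml
# RocksDB tuning sketch for write stall; example values only.
[rocksdb]
# Split level-0 -> level-1 compactions into concurrent subtasks.
max-sub-compactions = 2

[rocksdb.defaultcf]
# Trade CPU for disk throughput with a higher-compression-ratio algorithm.
compression-per-level = ["no", "no", "zstd", "zstd", "zstd", "zstd", "zstd"]
```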
### I/O issues found in alerts
The cluster deployment tool (TiUP) deploys the cluster with alert components by default, which have built-in alert items and thresholds. The following alert items are related to I/O:
- TiKV_write_stall
- TiKV_raft_log_lag
- TiKV_async_request_snapshot_duration_seconds
- TiKV_async_request_write_duration_seconds
- TiKV_raft_append_log_duration_secs
- TiKV_raft_apply_log_duration_secs
## Handle I/O issues
- When an I/O hotspot issue is confirmed, refer to Handle TiDB Hotspot Issues to eliminate the I/O hotspots.
- When it is confirmed that the overall I/O performance has become the bottleneck, and the application side will keep demanding more I/O performance, take advantage of the distributed database's scaling capability and increase the number of TiKV nodes to obtain greater overall I/O throughput.
- Adjust some of the parameters as described above, and use computing/memory resources to make up for disk storage resources.
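Scaling out TiKV is done with TiUP. The following is a sketch, assuming a cluster named `tidb-test` and a topology file `scale-out.yml` that declares the new `tikv_servers` entries; both names are placeholders for your own cluster:

```shell
# Sketch of scaling out TiKV with TiUP; cluster name and topology file
# are placeholders for your environment.
CLUSTER_NAME="tidb-test"
TOPOLOGY_FILE="scale-out.yml"

if command -v tiup >/dev/null 2>&1; then
    # Check the scale-out topology against the existing cluster first
    tiup cluster check "$CLUSTER_NAME" "$TOPOLOGY_FILE" --cluster
    tiup cluster scale-out "$CLUSTER_NAME" "$TOPOLOGY_FILE"
    SCALE_MSG="scale-out submitted"
else
    SCALE_MSG="tiup is not installed; install TiUP first"
fi
echo "$SCALE_MSG"
```

After the new TiKV nodes join, PD rebalances Regions onto them automatically, which spreads the write I/O across more disks.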