Insert Data
This document describes how to insert data into TiDB by using the SQL language with different programming languages.
Before you start
Before reading this document, you need to prepare the following:
- Build a TiDB Cluster in TiDB Cloud (Developer Tier).
- Read Schema Design Overview, Create a Database, Create a Table, and Create Secondary Indexes
Insert rows
There are two ways to insert multiple rows of data. For example, if you need to insert data for 3 players, you can use either of the following:
A multi-line insertion statement:
INSERT INTO `player` (`id`, `coins`, `goods`) VALUES (1, 1000, 1), (2, 230, 2), (3, 300, 5);
Multiple single-line insertion statements:
INSERT INTO `player` (`id`, `coins`, `goods`) VALUES (1, 1000, 1);
INSERT INTO `player` (`id`, `coins`, `goods`) VALUES (2, 230, 2);
INSERT INTO `player` (`id`, `coins`, `goods`) VALUES (3, 300, 5);
Generally, the multi-line insertion statement runs faster than multiple single-line insertion statements.
- SQL
- Java
- Golang
CREATE TABLE `player` (`id` INT, `coins` INT, `goods` INT);
INSERT INTO `player` (`id`, `coins`, `goods`) VALUES (1, 1000, 1), (2, 230, 2);
For more information on how to use this SQL, see Connecting to a TiDB Cluster and follow the steps to enter the SQL statement after connecting to a TiDB cluster using a client.
// ds is an entity of com.mysql.cj.jdbc.MysqlDataSource
try (Connection connection = ds.getConnection()) {
    connection.setAutoCommit(false);
    // Prepare a parameterized INSERT once and add one batch entry per row.
    PreparedStatement pstmt = connection.prepareStatement("INSERT INTO player (id, coins, goods) VALUES (?, ?, ?)");

    // first player
    pstmt.setInt(1, 1);
    pstmt.setInt(2, 1000);
    pstmt.setInt(3, 1);
    pstmt.addBatch();

    // second player
    pstmt.setInt(1, 2);
    pstmt.setInt(2, 230);
    pstmt.setInt(3, 2);
    pstmt.addBatch();

    // Send both rows in one batch, then commit the transaction.
    pstmt.executeBatch();
    connection.commit();
} catch (SQLException e) {
    e.printStackTrace();
}
Due to the default MySQL JDBC Driver settings, you need to change some parameters to get better bulk insert performance.
| Parameter | Meaning | Recommended Scenario | Recommended Configuration |
|---|---|---|---|
| useServerPrepStmts | Whether to use server-side prepared statements | When you need to use a prepared statement more than once | true |
| cachePrepStmts | Whether the client caches prepared statements | When useServerPrepStmts=true | true |
| prepStmtCacheSqlLimit | Maximum size of a prepared statement (256 characters by default) | When a prepared statement is longer than 256 characters | Configure according to the actual size of the prepared statements |
| prepStmtCacheSize | Maximum number of cached prepared statements (25 by default) | When the number of prepared statements is greater than 25 | Configure according to the actual number of prepared statements |
| rewriteBatchedStatements | Whether to rewrite batched statements | When batch operations are required | true |
| allowMultiQueries | Whether to allow multiple statements in one query | Required because of a client bug when rewriteBatchedStatements=true and useServerPrepStmts=true | true |
The MySQL JDBC Driver also provides an integrated configuration option: useConfigs. When it is set to maxPerformance, it is equivalent to applying a set of configurations. Taking mysql:mysql-connector-java:8.0.28 as an example, useConfigs=maxPerformance contains the following:
cachePrepStmts=true
cacheCallableStmts=true
cacheServerConfiguration=true
useLocalSessionState=true
elideSetAutoCommits=true
alwaysSendSetIsolation=false
enableQueryTimeouts=false
connectionAttributes=none
useInformationSchema=true
You can check mysql-connector-java-{version}.jar!/com/mysql/cj/configurations/maxPerformance.properties to get the configurations contained in useConfigs=maxPerformance for the corresponding version of MySQL JDBC Driver.
The following is a typical JDBC connection string configuration. In this example, Host: 127.0.0.1, Port: 4000, User name: root, Password: null, Default database: test:
jdbc:mysql://127.0.0.1:4000/test?user=root&useConfigs=maxPerformance&useServerPrepStmts=true&prepStmtCacheSqlLimit=2048&prepStmtCacheSize=256&rewriteBatchedStatements=true&allowMultiQueries=true
For a complete example in Java, see:
package main

import (
	"database/sql"
	"strings"

	_ "github.com/go-sql-driver/mysql"
)

type Player struct {
	ID    string
	Coins int
	Goods int
}

// bulkInsertPlayers inserts players in batches of batchSize rows within a single transaction.
func bulkInsertPlayers(db *sql.DB, players []Player, batchSize int) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}

	// Prepare one multi-row INSERT statement and reuse it for every full batch.
	stmt, err := tx.Prepare(buildBulkInsertSQL(batchSize))
	if err != nil {
		return err
	}
	defer stmt.Close()

	for len(players) > batchSize {
		if _, err := stmt.Exec(playerToArgs(players[:batchSize])...); err != nil {
			tx.Rollback()
			return err
		}
		players = players[batchSize:]
	}

	// Insert the remaining rows with a statement sized to the remainder.
	if len(players) != 0 {
		if _, err := tx.Exec(buildBulkInsertSQL(len(players)), playerToArgs(players)...); err != nil {
			tx.Rollback()
			return err
		}
	}

	if err := tx.Commit(); err != nil {
		tx.Rollback()
		return err
	}

	return nil
}

// playerToArgs flattens a slice of players into the argument list expected by the INSERT statement.
func playerToArgs(players []Player) []interface{} {
	var args []interface{}
	for _, player := range players {
		args = append(args, player.ID, player.Coins, player.Goods)
	}
	return args
}

// buildBulkInsertSQL builds an INSERT statement with `amount` groups of placeholders.
func buildBulkInsertSQL(amount int) string {
	return "INSERT INTO player (id, coins, goods) VALUES (?, ?, ?)" + strings.Repeat(",(?,?,?)", amount-1)
}
For a complete example in Golang, see:
Bulk-Insert
If you need to quickly import a large amount of data into a TiDB cluster, it is recommended that you use the range of data migration tools provided by PingCAP. Using INSERT statements is not the best way, because it is not efficient and you need to handle exceptions and other issues on your own.
The following are the recommended tools for bulk-insert:
- Data export: Dumpling. You can export MySQL or TiDB data to local storage or Amazon S3.
- Data import: TiDB Lightning. You can import Dumpling exported data, a CSV file, or Migrate Data from Amazon Aurora to TiDB. It also supports reading data from a local disk or Amazon S3 cloud disk.
- Data replication: TiDB Data Migration. You can replicate MySQL, MariaDB, and Amazon Aurora databases to TiDB. It also supports merging and migrating the sharded instances and tables from the source databases.
- Data backup and restore: Backup & Restore (BR). Compared to Dumpling, BR is more suitable for big data scenarios.
- Data import: Data Import Task page in the TiDB Cloud console. You can import Dumpling exported data, a CSV file, or Migrate Data from Amazon Aurora to TiDB. It also supports reading data from a local disk, Amazon S3 cloud disk, or GCS cloud disk.
- Data backup and restore: Backup page in the TiDB Cloud console. Compared to Dumpling, backup and restore is more suitable for big data scenarios.
Avoid hotspots
When designing a table, you need to consider whether it will receive a large number of insert operations. If so, avoid hotspots during table design. See the Select primary key section and follow the Rules when selecting the primary key.
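As an illustration only (the game.scores table and its columns below are hypothetical), a monotonically increasing primary key such as AUTO_INCREMENT concentrates concurrent inserts on a single region, whereas the AUTO_RANDOM attribute discussed later in this document scatters them:
-- Hypothetical table: `id` values are allocated randomly, so concurrent inserts
-- are spread across regions instead of piling up at the tail of the table.
CREATE TABLE `game`.`scores` (
    `id` BIGINT AUTO_RANDOM,
    `score` INT,
    PRIMARY KEY (`id`) CLUSTERED
);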
For more information on how to handle hotspot issues, see Troubleshoot Hotspot Issues.
Insert data to a table with the AUTO_RANDOM primary key
If the primary key of the table you insert data into has the AUTO_RANDOM attribute, then by default the primary key cannot be specified explicitly. For example, in the bookshop database, you can see that the id field of the users table has the AUTO_RANDOM attribute.
In this case, you cannot use a SQL statement like the following to insert data:
INSERT INTO `bookshop`.`users` (`id`, `balance`, `nickname`) VALUES (1, 0.00, 'nicky');
An error will occur:
ERROR 8216 (HY000): Invalid auto random: Explicit insertion on auto_random column is disabled. Try to set @@allow_auto_random_explicit_insert = true.
It is not recommended to manually specify the AUTO_RANDOM column during insertion.
There are two solutions to handle this error:
- (Recommended) Remove this column from the insert statement and use the AUTO_RANDOM value that TiDB initializes for you. This fits the semantics of AUTO_RANDOM:
INSERT INTO `bookshop`.`users` (`balance`, `nickname`) VALUES (0.00, 'nicky');
- If you are sure that you must specify this column, you can use the SET statement to change the user variable and allow the AUTO_RANDOM column to be specified during insertion:
SET @@allow_auto_random_explicit_insert = true;
INSERT INTO `bookshop`.`users` (`id`, `balance`, `nickname`) VALUES (1, 0.00, 'nicky');
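If you want to see the value that TiDB generated for the recommended form, you can simply query the row back afterwards (a quick optional check; the actual id is random):
SELECT `id`, `nickname`, `balance` FROM `bookshop`.`users` WHERE `nickname` = 'nicky';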
Use HTAP
In TiDB, HTAP capabilities save you from performing additional operations when inserting data. There is no additional insertion logic, and TiDB automatically guarantees data consistency. All you need to do is turn on column-oriented (TiFlash) replica synchronization after creating the table, and then use the column-oriented replica to speed up your queries.
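For example, assuming your cluster has TiFlash nodes available, the following sketch adds one columnar replica for the bookshop.users table used earlier and then runs an analytical query that can benefit from it:
ALTER TABLE `bookshop`.`users` SET TIFLASH REPLICA 1;
SELECT AVG(`balance`) FROM `bookshop`.`users`;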