First, create a table to work with:
CREATE TABLE `user` (
  `id` varchar(32) CHARACTER SET utf8 NOT NULL,
  `name` varchar(50) CHARACTER SET utf8 DEFAULT NULL,
  `dflag` char(1) CHARACTER SET utf8 DEFAULT NULL,
  PRIMARY KEY (`id`)
);
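The original does not show the corresponding entity class. A minimal User POJO, inferred from the table columns and the setters used in the tests below (setId, setName, setDelFlag), might look like this:

public class User {
    private String id;      // maps to user.id
    private String name;    // maps to user.name
    private String delFlag; // maps to user.dflag

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getDelFlag() { return delFlag; }
    public void setDelFlag(String delFlag) { this.delFlag = delFlag; }
}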
jdbc.properties configuration
jdbc.driver=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://127.0.0.1:3306/qingmu?characterEncoding=utf-8
jdbc.username=root
jdbc.password=admin
# Initial number of connections
jdbc.initialSize=1
# Maximum number of connections
jdbc.maxActive=20
# Maximum number of idle connections
jdbc.maxIdle=20
# Minimum number of idle connections
jdbc.minIdle=1
# Maximum wait time (ms)
jdbc.maxWait=60000
SqlMapConfig.xml configuration file:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration
        PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
    <properties resource="jdbc.properties"></properties>
    <!-- Automatically scan all classes under the pojo package -->
    <typeAliases>
        <package name="com.qingmu.pojo"></package>
    </typeAliases>
    <environments default="development">
        <environment id="development">
            <transactionManager type="JDBC"/>
            <dataSource type="POOLED">
                <property name="driver" value="${jdbc.driver}"/>
                <property name="url" value="${jdbc.url}"/>
                <property name="username" value="${jdbc.username}"/>
                <property name="password" value="${jdbc.password}"/>
            </dataSource>
        </environment>
    </environments>
    <!-- Register mapper files -->
    <mappers>
        <mapper resource="UserMapper.xml"></mapper>
    </mappers>
</configuration>
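The tests below also rely on a mapper interface that the original never shows. A minimal sketch of the assumed UserMapper interface, with the two methods used in the examples:

import java.util.List;

public interface UserMapper {
    // Single-row insert, used by the for-loop and ExecutorType.BATCH examples
    int insert(User user);

    // Multi-row insert backed by the <foreach> statement shown further below
    int insertBatch(List<User> users);
}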
The first method: an ordinary for loop
In other words, if you only have, say, 100 rows to insert, you can simply insert them one at a time in a for loop. No configuration changes are needed; the test method is:
@Test // Insert rows one at a time in a for loop
public void insertBatch() throws IOException {
    InputStream resourceAsStream = Resources.getResourceAsStream("SqlMapConfig.xml");
    SqlSessionFactoryBuilder sqlSessionFactoryBuilder = new SqlSessionFactoryBuilder();
    SqlSessionFactory sqlSessionFactory = sqlSessionFactoryBuilder.build(resourceAsStream);
    SqlSession sqlSession = sqlSessionFactory.openSession();
    UserMapper mapper = sqlSession.getMapper(UserMapper.class);
    User user;
    for (int i = 0; i < 100; i++) {
        user = new User();
        user.setId("test" + i);
        user.setName("name" + i);
        user.setDelFlag("0");
        mapper.insert(user);
        sqlSession.commit(); // each row is sent and committed individually, which is slow
    }
    sqlSession.close();
}
The second method: pass ExecutorType.BATCH when opening the session:
@Test
public void testInsertBatch2() throws Exception {
    long start = System.currentTimeMillis();
    User user;
    // Unlike the for-loop version above, open the session in BATCH mode with auto-commit off
    SqlSession sqlSession = sqlSessionTemplate.getSqlSessionFactory()
            .openSession(ExecutorType.BATCH, false);
    UserMapper mapper = sqlSession.getMapper(UserMapper.class);
    for (int i = 0; i < 500; i++) {
        user = new User();
        user.setId("test" + i);
        user.setName("name" + i);
        user.setDelFlag("0");
        mapper.insert(user);
    }
    sqlSession.commit(); // the buffered statements are flushed as a JDBC batch here
    long end = System.currentTimeMillis();
    System.out.println("---------------" + (end - start) + "---------------");
    sqlSession.close();
}
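For very large volumes, buffering every statement until a single commit can use a lot of memory. A common variant (not shown in the original) is to flush the batch periodically; a sketch, assuming the same mapper and a hypothetical row count:

@Test
public void testInsertBatchChunkedFlush() throws Exception {
    SqlSession sqlSession = sqlSessionTemplate.getSqlSessionFactory()
            .openSession(ExecutorType.BATCH, false);
    try {
        UserMapper mapper = sqlSession.getMapper(UserMapper.class);
        User user;
        for (int i = 0; i < 100000; i++) {
            user = new User();
            user.setId("test" + i);
            user.setName("name" + i);
            user.setDelFlag("0");
            mapper.insert(user);
            if (i % 1000 == 999) {
                sqlSession.flushStatements(); // send the accumulated batch to the server
                sqlSession.clearCache();      // drop the local session cache
            }
        }
        sqlSession.commit();
    } finally {
        sqlSession.close();
    }
}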
The mapper mapping file does not need to be changed; BATCH mode reuses the same single-row insert statement.
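For reference, the single-row insert statement in UserMapper.xml that both of the methods above call might look like this (a sketch; the original does not show it):

<insert id="insert" parameterType="User">
    INSERT INTO user (id, name, dflag)
    VALUES (#{id}, #{name}, #{delFlag})
</insert>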
The third method: use the foreach tag to build a single multi-row INSERT
<insert id="insertBatch">
    INSERT INTO user (id, name, dflag)
    VALUES
    <foreach collection="list" item="user" separator=",">
        (#{user.id}, #{user.name}, #{user.delFlag})
    </foreach>
</insert>
@Test
public void testInsertBatch() throws Exception {
    List<User> list = new ArrayList<>();
    User user;
    for (int i = 0; i < 10000; i++) {
        user = new User();
        user.setId("test" + i);
        user.setName("name" + i);
        user.setDelFlag("0");
        list.add(user);
    }
    userService.insertBatch(list);
}
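The userService used here is not shown in the original; presumably it just delegates to the mapper. A minimal Spring-style sketch, assuming the UserMapper interface above:

@Service
public class UserService {
    @Autowired
    private UserMapper userMapper;

    // Delegates the whole list to the <foreach>-based multi-row insert
    public void insertBatch(List<User> users) {
        userMapper.insertBatch(users);
    }
}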
Special note: by default, MySQL accepts statements of at most 1048576 bytes (1 MB). With the third method, if the generated SQL exceeds 1 MB, the following exception is thrown. To raise the limit, adjust "max_allowed_packet = 1M" in the [mysqld] section of the my.ini file under the MySQL installation directory.
nested exception is com.mysql.jdbc.PacketTooBigException: Packet for query is too large (5677854 > 1048576). You can change this value on the server by setting the 'max_allowed_packet' variable.
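If you cannot (or prefer not to) change the server setting, you can also keep each generated statement under the limit by splitting the list into chunks before calling insertBatch. A sketch, with a hypothetical chunkSize tuned to your row size:

public void insertBatchChunked(List<User> list) {
    int chunkSize = 1000; // assumed value; choose it so each INSERT stays under max_allowed_packet
    for (int from = 0; from < list.size(); from += chunkSize) {
        int to = Math.min(from + chunkSize, list.size());
        userService.insertBatch(list.subList(from, to));
    }
}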
Conclusion: the dynamic-SQL (foreach) batch insert takes the least time and is the most efficient; the other two methods perform poorly as the data volume grows.