Caching
1. What is a cache?
A cache is temporary data held in memory.
By putting frequently queried data in the cache (memory), users can read it from the cache instead of querying the disk (the relational database's data files). This improves query efficiency and helps solve the performance problems of high-concurrency systems.
2. Why cache?
Caching reduces the number of interactions with the database, lowering system overhead and improving system efficiency.
3. What kind of data can be cached?
Frequently queried and infrequently changed data.
MyBatis ships with a powerful transactional query caching mechanism that is easy to configure and customize. To make it more powerful and easier to configure, many improvements were made to the cache implementation in MyBatis 3.
L1 cache
L1 cache is also called local cache:
Data queried during the same session with the database is placed in the local cache.
If the same data is needed again later, it is fetched directly from the cache; the database is not queried again.
By default, only local session caching (the L1 cache) is enabled, and it caches data within a single session only.
Four situations of L1 cache invalidation
The L1 cache is a SqlSession-level cache. It is always on, and we cannot turn it off.
L1 cache invalidation means the current L1 cache is not used; the effect is that another query request must be sent to the database!
- sqlSession is different
@Test
public void testQueryUserById() {
    SqlSession session = MybatisUtils.getSession();
    SqlSession session2 = MybatisUtils.getSession();
    UserMapper mapper = session.getMapper(UserMapper.class);
    UserMapper mapper2 = session2.getMapper(UserMapper.class);

    User user = mapper.queryUserById(1);
    System.out.println(user);
    User user2 = mapper2.queryUserById(1);
    System.out.println(user2);
    System.out.println(user == user2);

    session.close();
    session2.close();
}
Observation: two SQL statements were sent!
Conclusion: the caches in each sqlSession are independent of each other
- The sqlSession is the same, but the query criteria are different
@Test
public void testQueryUserById() {
    SqlSession session = MybatisUtils.getSession();
    UserMapper mapper = session.getMapper(UserMapper.class);
    UserMapper mapper2 = session.getMapper(UserMapper.class);

    User user = mapper.queryUserById(1);
    System.out.println(user);
    User user2 = mapper2.queryUserById(2);
    System.out.println(user2);
    System.out.println(user == user2);

    session.close();
}
Observation: two SQL statements were sent! This is exactly what you would expect.
Conclusion: this data does not exist in the current cache
- The sqlSession is the same, but an insert, update, or delete is performed between the two queries!
Add a method to the interface:
//Modify a user
int updateUser(Map map);
Write the SQL:
<update id="updateUser" parameterType="map">
    update user set name = #{name} where id = #{id}
</update>
Test:
@Test
public void testQueryUserById() {
    SqlSession session = MybatisUtils.getSession();
    UserMapper mapper = session.getMapper(UserMapper.class);

    User user = mapper.queryUserById(1);
    System.out.println(user);

    HashMap<String, Object> map = new HashMap<>();
    map.put("name", "kuangshen");
    map.put("id", 4);
    mapper.updateUser(map); // any insert/update/delete flushes the session's L1 cache

    User user2 = mapper.queryUserById(1);
    System.out.println(user2);
    System.out.println(user == user2);

    session.close();
}
Observation: the query is re-executed after the insert/update/delete in between.
Conclusion: any insert, update, or delete flushes the L1 cache, because it may have changed the cached data.
- The sqlSession is the same, but the L1 cache is manually cleared between the two queries
@Test
public void testQueryUserById() {
    SqlSession session = MybatisUtils.getSession();
    UserMapper mapper = session.getMapper(UserMapper.class);

    User user = mapper.queryUserById(1);
    System.out.println(user);

    session.clearCache(); // manually clear the L1 cache

    User user2 = mapper.queryUserById(1);
    System.out.println(user2);
    System.out.println(user == user2);

    session.close();
}
Essentially, the L1 cache is just a Map.
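The "cache is a map" idea can be sketched in a few lines. This is a hypothetical simplification (the class name and key format are invented for illustration); MyBatis's real L1 cache builds a CacheKey that also incorporates the SQL, row bounds, parameter values, and environment id.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the L1 cache as a plain map whose key combines the
// statement id and the query parameter, and whose value is the result.
public class LocalCacheSketch {
    private final Map<String, Object> cache = new HashMap<>();

    // Simplified cache key: statement id + parameter
    private String key(String statementId, Object param) {
        return statementId + ":" + param;
    }

    public Object query(String statementId, Object param) {
        String k = key(statementId, param);
        if (cache.containsKey(k)) {
            return cache.get(k);            // cache hit: no SQL would be sent
        }
        Object result = "row-for-" + param; // stand-in for a real database query
        cache.put(k, result);               // cache miss: query, then store
        return result;
    }

    // clearCache() or any insert/update/delete empties the whole map
    public void clear() {
        cache.clear();
    }
}
```

Repeating the same "query" returns the same cached instance, and clearing the map forces a fresh lookup, mirroring the four invalidation cases above.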
L2 cache
- L2 cache is also called the global cache. Because the scope of the L1 cache is too narrow, the L2 cache was born.
- It is a namespace-level cache: one namespace corresponds to one L2 cache.
- Working mechanism:
  - When a session queries a piece of data, that data is placed in the L1 cache of the current session.
  - When the session is closed, its L1 cache disappears; what we want instead is that, on session close, the data in the L1 cache is saved to the L2 cache.
  - A new session querying the same information can then get the content from the L2 cache.
  - Data queried through different mappers is placed in each mapper's own cache (map).

Enable L2 cache
1. Enable the global cache switch in mybatis-config.xml:
<setting name="cacheEnabled" value="true"/>
2. Configure each mapper XML file that should use the L2 cache. It is very simple: [xxxMapper.xml]
<cache/>
Basically, that's it. The effect of this one simple line is as follows:
- All results of select statements in the mapped statement file will be cached.
- All insert, update, and delete statements in the mapped statement file will flush the cache.
- The cache will use a Least Recently Used (LRU) algorithm to evict entries that are no longer needed.
- The cache will not be flushed on a schedule (i.e., there is no flush interval).
- The cache will store up to 1024 references to lists or objects (whatever the query method returns).
- The cache is treated as a read/write cache, meaning the objects retrieved are not shared and can be safely modified by the caller without interfering with potential modifications made by other callers or threads.
Tip: the cache only applies to statements declared in the mapping file where the cache tag is located. If you mix the Java API and XML mapping files, statements declared in the mapper interface are not cached by default. You need to use the @CacheNamespaceRef annotation to specify the cache scope.
These attributes can be modified through the attributes of the cache element. For example:
<cache eviction="FIFO" flushInterval="60000" size="512" readOnly="true"/>
This more advanced configuration creates a FIFO cache that is flushed every 60 seconds. It stores up to 512 references to result objects or lists, and the objects returned are considered read-only, so modifying them could cause conflicts with callers in other threads.
The available eviction policies are:
- LRU – least recently used: removes objects that have not been used for the longest time.
- FIFO – first in first out: remove objects in the order they enter the cache.
- SOFT – SOFT reference: removes objects based on garbage collector status and SOFT reference rules.
- WEAK – WEAK references: remove objects more actively based on garbage collector status and WEAK reference rules.
The default eviction policy is LRU.
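The LRU idea can be sketched with java.util.LinkedHashMap in access order, which mirrors how MyBatis's own LruCache decorator tracks recency internally (the class here is illustrative, not MyBatis code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of an LRU cache: LinkedHashMap with accessOrder=true moves an
// entry to the tail on every get, so the head is always the least
// recently used entry and is the one evicted when capacity is exceeded.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruSketch(int maxSize) {
        super(16, 0.75f, true); // accessOrder=true: gets reorder entries
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the least recently used entry
    }
}
```

Touching an entry with get() protects it from eviction, which is exactly the "removes objects that have not been used for the longest time" behavior described above.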
The flushInterval property can be set to any positive integer, representing a reasonable amount of time in milliseconds. By default it is not set, meaning there is no flush interval and the cache is flushed only when a statement is executed.
The size (number of references) property can be set to any positive integer. Pay attention to the size of the object to be cached and the memory resources available in the running environment. The default value is 1024.
The readOnly property can be set to true or false. A read-only cache returns the same instance of a cached object to all callers, so these objects must not be modified; this offers a significant performance gain. A read/write cache returns a copy of the cached object (via serialization). This is slower but safer, so the default is false.
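The serialize-then-deserialize copy that a read/write cache hands out can be sketched as follows. This is a minimal illustration of the technique (the helper class is invented for the example); MyBatis applies the same idea through its SerializedCache decorator.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch: a read/write cache returns an independent copy of the cached
// object by round-tripping it through Java serialization.
public class CopySketch {
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T deepCopy(T value) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(value); // write the object graph to bytes
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (T) in.readObject(); // read back an independent copy
            }
        } catch (Exception e) {
            throw new RuntimeException("copy failed", e);
        }
    }
}
```

Because every caller receives a distinct copy, one caller mutating its result cannot corrupt what other callers (or threads) see, at the cost of the serialization overhead described above.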
Tip: the L2 cache is transactional. This means the cache is updated when a SqlSession finishes and commits, or finishes and rolls back, provided no insert/delete/update statement with flushCache=true was executed.
Use custom cache
In addition to the above customized caching methods, you can also completely override the caching behavior by implementing your own caching or creating adapters for other third-party caching schemes.
<cache type="com.domain.something.MyCustomCache"/>
This example shows how to use a custom cache implementation. The class specified by the type attribute must implement the org.apache.ibatis.cache.Cache interface and provide a constructor that accepts a String parameter as its id. This interface is one of the many complex interfaces in the MyBatis framework, but its behavior is very simple.
public interface Cache {
    String getId();
    int getSize();
    void putObject(Object key, Object value);
    Object getObject(Object key);
    boolean hasKey(Object key);
    Object removeObject(Object key);
    void clear();
}
To configure your cache, simply add a public JavaBean attribute to your cache implementation, and then pass the attribute value through the cache element. For example, the following example will call a method called setCacheFile(String file) in your cache implementation:
<cache type="com.domain.something.MyCustomCache">
    <property name="cacheFile" value="/tmp/my-custom-cache.tmp"/>
</cache>
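A minimal sketch of such a custom cache might look like the following. The interface is restated locally so the example compiles without MyBatis on the classpath; in a real project you would implement org.apache.ibatis.cache.Cache directly, MyBatis would inject the namespace id through the String constructor, and it would call setCacheFile for the property configured in the cache element.

```java
import java.util.HashMap;
import java.util.Map;

// Local restatement of the Cache interface so this sketch is self-contained.
interface Cache {
    String getId();
    int getSize();
    void putObject(Object key, Object value);
    Object getObject(Object key);
    boolean hasKey(Object key);
    Object removeObject(Object key);
    void clear();
}

// Illustrative custom cache backed by a HashMap (a real implementation
// might persist to cacheFile; here the field is only stored).
public class MyCustomCache implements Cache {
    private final String id;                          // namespace id, injected by MyBatis
    private final Map<Object, Object> store = new HashMap<>();
    private String cacheFile;                         // set via <property name="cacheFile" .../>

    // MyBatis requires a constructor taking the cache id (the namespace)
    public MyCustomCache(String id) { this.id = id; }

    // JavaBean setter matching <property name="cacheFile" value="..."/>
    public void setCacheFile(String file) { this.cacheFile = file; }

    public String getId() { return id; }
    public int getSize() { return store.size(); }
    public void putObject(Object key, Object value) { store.put(key, value); }
    public Object getObject(Object key) { return store.get(key); }
    public boolean hasKey(Object key) { return store.containsKey(key); }
    public Object removeObject(Object key) { return store.remove(key); }
    public void clear() { store.clear(); }
}
```

Note the two hooks MyBatis relies on: the String constructor (for the namespace id) and the public setter (for each property element).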
You can use all simple types as the types of JavaBean properties, and MyBatis will convert them. You can also use placeholders (such as ${cache.file}) to replace them with the values defined in the profile properties.
Since version 3.4.2, MyBatis has supported calling an initialization method after all properties have been set. If you want to use this feature, implement the org.apache.ibatis.builder.InitializingObject interface in your custom cache class.
public interface InitializingObject { void initialize() throws Exception; }
Tip: the cache configuration from the previous section (eviction policy, read/write behavior, etc.) does not apply to custom caches.
Note that the cache configuration and cache instance are bound to the namespace of the SQL mapping file. Therefore, all statements and caches in the same namespace will be bound together through the namespace. Each statement can customize the way it interacts with the cache, or completely exclude them from the cache, which can be achieved by using two simple attributes on each statement. By default, the statement is configured as follows:
<select ... flushCache="false" useCache="true"/>
<insert ... flushCache="true"/>
<update ... flushCache="true"/>
<delete ... flushCache="true"/>
Since this is the default behavior, it is obvious that you should never explicitly configure a statement in this way. However, if you want to change the default behavior, you only need to set the flushCache and useCache properties. For example, in some cases, you may want the results of a specific select statement to be excluded from the cache, or you may want a select statement to empty the cache. Similarly, you may want some update statements to execute without flushing the cache.
cache-ref
Recall the content of the previous section: for statements in a namespace, only the cache of that namespace is used for lookups and flushes. However, you may want to share the same cache configuration and instance across multiple namespaces. To achieve this, you can use the cache-ref element to reference another namespace's cache.
<cache-ref namespace="com.someone.application.data.SomeMapper"/>
Third-party cache implementation: EhCache
Ehcache is a widely used, general-purpose Java distributed cache.
To use Ehcache in your application, first add the dependency:
<!-- https://mvnrepository.com/artifact/org.mybatis.caches/mybatis-ehcache -->
<dependency>
    <groupId>org.mybatis.caches</groupId>
    <artifactId>mybatis-ehcache</artifactId>
    <version>1.1.0</version>
</dependency>
Then specify the corresponding cache type in mapper.xml:
<mapper namespace="org.acme.FooMapper">
    <cache type="org.mybatis.caches.ehcache.EhcacheCache"/>
</mapper>
Write an ehcache.xml configuration file. If /ehcache.xml cannot be found on the classpath at load time, or there is a problem with it, the default configuration is used.
<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
         updateCheck="false">
    <!--
       diskStore: the disk cache path. Ehcache has two tiers, memory and disk;
       this attribute defines where the disk tier stores its files.
       Recognized magic values:
         user.home      - the user's home directory
         user.dir       - the user's current working directory
         java.io.tmpdir - the default temporary-file path
    -->
    <diskStore path="./tmpdir/Tmp_EhCache"/>

    <!-- defaultCache: the policy used when Ehcache cannot find a cache with
         the requested name. Only one defaultCache may be defined. -->
    <defaultCache
            eternal="false"
            maxElementsInMemory="10000"
            overflowToDisk="false"
            diskPersistent="false"
            timeToIdleSeconds="1800"
            timeToLiveSeconds="259200"
            memoryStoreEvictionPolicy="LRU"/>

    <cache
            name="cloud_user"
            eternal="false"
            maxElementsInMemory="5000"
            overflowToDisk="false"
            diskPersistent="false"
            timeToIdleSeconds="1800"
            timeToLiveSeconds="1800"
            memoryStoreEvictionPolicy="LRU"/>

    <!--
       Attribute reference:
         name:                maximum cache name.
         maxElementsInMemory: maximum number of elements kept in memory.
         maxElementsOnDisk:   maximum number of elements kept on disk.
         eternal:             whether elements are valid forever; once set to
                              true, the timeout settings no longer apply.
         overflowToDisk:      whether to spill elements to disk when the
                              memory store is full.
         timeToIdleSeconds:   allowed idle time (seconds) before an element
                              expires. Only used when eternal=false. Optional;
                              default 0, meaning infinite idle time.
         timeToLiveSeconds:   allowed lifetime (seconds) between creation and
                              expiration. Only used when eternal=false.
                              Default 0, meaning infinite lifetime.
         diskPersistent:      whether the disk store persists between restarts
                              of the virtual machine. Default false.
         diskSpoolBufferSizeMB: buffer size for the DiskStore (disk cache).
                              Default 30 MB. Each cache should have its own
                              buffer.
         diskExpiryThreadIntervalSeconds: interval of the disk expiry thread,
                              default 120 seconds.
         clearOnFlush:        whether to clear when memory reaches its limit.
         memoryStoreEvictionPolicy: policy used to evict elements when
                              maxElementsInMemory is reached. One of:
           LRU  (least recently used, the default) - each cached element
                carries a timestamp; when the cache is full and room is needed
                for new elements, the element whose timestamp is farthest from
                the current time is evicted.
           FIFO (first in, first out) - elements are evicted in the order they
                entered the cache.
           LFU  (least frequently used) - each cached element carries a hit
                count; the element with the smallest hit count is evicted.
    -->
</ehcache>