mybatis source code analysis extension of L2 cache

Posted by vikette on Wed, 26 Jan 2022 18:44:04 +0100

1, A review of the L2 cache

The L2 cache is built on top of the L1 cache. When MyBatis receives a query, it checks the L2 cache first; on a miss it falls back to the L1 cache, and only if that also misses does it hit the database. The lookup order is:
L2 cache -> L1 cache -> database.
Unlike the L1 cache, which is bound to a SqlSession, the L2 cache is bound to a namespace: each Mapper holds one Cache, and all MappedStatements in that Mapper share it.

If you want to dig deeper, see the earlier article on the MyBatis cache.
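The lookup order above can be sketched in plain Java. This is a conceptual model of the query flow, not MyBatis code; the names TieredLookup, l2, l1, and db are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual model of MyBatis query flow: L2 cache -> L1 cache -> database.
public class TieredLookup {
    final Map<String, String> l2 = new HashMap<>(); // namespace-level, shared across sessions
    final Map<String, String> l1 = new HashMap<>(); // session-level
    final Function<String, String> db;              // stands in for the real database query

    public TieredLookup(Function<String, String> db) { this.db = db; }

    public String query(String key) {
        String v = l2.get(key);           // 1. second-level cache
        if (v == null) {
            v = l1.get(key);              // 2. first-level cache
            if (v == null) {
                v = db.apply(key);        // 3. database
                l1.put(key, v);           // fill the session-level cache
            }
        }
        return v;
    }
}
```

A second identical query is then served from the cache without touching the database stand-in.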

1. Enabling the L2 cache

There are three steps:

1) Enable global L2 cache configuration

<settings>
   <setting name="cacheEnabled" value="true"/>
</settings>

2) Add the <cache> tag to each Mapper XML file that needs L2 caching

<cache></cache>

3) Configure useCache="true" on the specific CRUD statement tag

<select id="findById" resultType="com.demo.pojo.User" useCache="true">
    select * from user where id = #{id}
</select>

2, Source code analysis

1. Configuration initialization: tag parsing

As shown in the previous MyBatis source code analysis, XML parsing is driven by the XMLConfigBuilder.parse() method.

1)XMLConfigBuilder

// parse()
public Configuration parse() {
    if (parsed) {
        throw new BuilderException("Each XMLConfigBuilder can only be used once.");
    }
    parsed = true;
    // here
    parseConfiguration(parser.evalNode("/configuration"));
    return configuration;
}

// parseConfiguration()
// Since the cache is configured in XML, let's go straight to the parsing of the mappers tag
private void parseConfiguration(XNode root) {
    try {
        // issue #117 read properties first
        propertiesElement(root.evalNode("properties"));
        Properties settings = settingsAsProperties(root.evalNode("settings"));
        loadCustomVfs(settings);
        loadCustomLogImpl(settings);
        typeAliasesElement(root.evalNode("typeAliases"));
        pluginElement(root.evalNode("plugins"));
        objectFactoryElement(root.evalNode("objectFactory"));
        objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
        reflectorFactoryElement(root.evalNode("reflectorFactory"));
        settingsElement(settings);
        // read it after objectFactory and objectWrapperFactory issue #631
        environmentsElement(root.evalNode("environments"));
        databaseIdProviderElement(root.evalNode("databaseIdProvider"));
        typeHandlerElement(root.evalNode("typeHandlers"));
        // Here it is
        mapperElement(root.evalNode("mappers"));
    } catch (Exception e) {
        throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " 
                + e, e);
    }
}

// mapperElement()
private void mapperElement(XNode parent) throws Exception {
    if (parent != null) {
        for (XNode child : parent.getChildren()) {
            if ("package".equals(child.getName())) {
                String mapperPackage = child.getStringAttribute("name");
                configuration.addMappers(mapperPackage);
            } else {
                String resource = child.getStringAttribute("resource");
                String url = child.getStringAttribute("url");
                String mapperClass = child.getStringAttribute("class");
                // According to the configuration of our example, we can directly go through the if judgment.
                if (resource != null && url == null && mapperClass == null) {
                    ErrorContext.instance().resource(resource);
                    InputStream inputStream = Resources.getResourceAsStream(resource);
                    XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, 
                            configuration, resource, configuration.getSqlFragments());
                    // Generate XMLMapperBuilder and execute its parse method.
                    mapperParser.parse();
                } else if (resource == null && url != null && mapperClass == null) {
                    ErrorContext.instance().resource(url);
                    InputStream inputStream = Resources.getUrlAsStream(url);
                    XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, 
                            configuration, url, configuration.getSqlFragments());
                    mapperParser.parse();
                } else if (resource == null && url == null && mapperClass != null) {
                    Class<?> mapperInterface = Resources.classForName(mapperClass);
                    configuration.addMapper(mapperInterface);
                } else {
                    throw new BuilderException("A mapper element may only specify a url,"
                            + " resource or class, but not more than one.");
                }
            }
        }
    }
}

2) Next, let's look at how the Mapper XML is parsed

XMLMapperBuilder

// parse()
public void parse() {
    if (!configuration.isResourceLoaded(resource)) {
        configurationElement(parser.evalNode("/mapper"));
        configuration.addLoadedResource(resource);
        bindMapperForNamespace();
    }

    parsePendingResultMaps();
    parsePendingCacheRefs();
    parsePendingStatements();
}

// configurationElement()
private void configurationElement(XNode context) {
    try {
        String namespace = context.getStringAttribute("namespace");
        if (namespace == null || namespace.isEmpty()) {
            throw new BuilderException("Mapper's namespace cannot be empty");
        }
        builderAssistant.setCurrentNamespace(namespace);
        cacheRefElement(context.evalNode("cache-ref"));
        // Finally, I see the processing of cache attributes here
        cacheElement(context.evalNode("cache"));
        parameterMapElement(context.evalNodes("/mapper/parameterMap"));
        resultMapElements(context.evalNodes("/mapper/resultMap"));
        sqlElement(context.evalNodes("/mapper/sql"));
        // Here, the generated Cache will be wrapped into the corresponding MappedStatement
        buildStatementFromContext(context.evalNodes("select|insert|update|delete"));
    } catch (Exception e) {
        throw new BuilderException("Error parsing Mapper XML. The XML location is '" 
                + resource + "'. Cause: " + e, e);
    }
}

// cacheElement()
private void cacheElement(XNode context) {
    if (context != null) {
        // Resolve the type attribute of the <cache/> tag. A custom Cache implementation
        // (e.g. a Redis-backed cache) can be plugged in here; without one, the same
        // PerpetualCache as the L1 cache is used.
        String type = context.getStringAttribute("type", "PERPETUAL");
        Class<? extends Cache> typeClass = typeAliasRegistry.resolveAlias(type);
        String eviction = context.getStringAttribute("eviction", "LRU");
        Class<? extends Cache> evictionClass = typeAliasRegistry.resolveAlias(eviction);
        Long flushInterval = context.getLongAttribute("flushInterval");
        Integer size = context.getIntAttribute("size");
        boolean readWrite = !context.getBooleanAttribute("readOnly", false);
        boolean blocking = context.getBooleanAttribute("blocking", false);
        Properties props = context.getChildrenAsProperties();
        // Building Cache objects
        builderAssistant.useNewCache(typeClass, evictionClass, flushInterval, size, 
                readWrite, blocking, props);
  }
}

3) How the Cache object is built: MapperBuilderAssistant#useNewCache

public Cache useNewCache(Class<? extends Cache> typeClass, 
        Class<? extends Cache> evictionClass, Long flushInterval, 
        Integer size, boolean readWrite, boolean blocking,
        Properties props) {
    // 1. Generate Cache object
    Cache cache = new CacheBuilder(currentNamespace)
            // If a type was given in <cache/>, use that custom Cache;
            // otherwise use the same PerpetualCache as the L1 cache.
            .implementation(valueOrDefault(typeClass, PerpetualCache.class))
            .addDecorator(valueOrDefault(evictionClass, LruCache.class))
            .clearInterval(flushInterval)
            .size(size)
            .readWrite(readWrite)
            .blocking(blocking)
            .properties(props)
            .build();
    // 2. Add to Configuration
    configuration.addCache(cache);
    // 3. Assign the cache to MapperBuilderAssistant.currentCache
    currentCache = cache;
    return cache;
}

Note that each Mapper XML parses the <cache> tag only once, so a single Cache object is created per namespace, registered in the Configuration, and assigned to MapperBuilderAssistant.currentCache.
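The decorator chain assembled by CacheBuilder can be sketched in plain Java: an LRU eviction policy wrapped around a HashMap-backed base cache, mirroring the LruCache-over-PerpetualCache combination. The class and method names below are illustrative, not MyBatis's own:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the decorator chain: LRU policy over a HashMap-backed cache.
public class LruDecoratorSketch {

    interface SimpleCache {
        void put(Object k, Object v);
        Object get(Object k);
        void remove(Object k);
    }

    // Base cache: roughly what PerpetualCache is -- one HashMap per namespace.
    static class PerpetualLike implements SimpleCache {
        private final Map<Object, Object> map = new HashMap<>();
        public void put(Object k, Object v) { map.put(k, v); }
        public Object get(Object k) { return map.get(k); }
        public void remove(Object k) { map.remove(k); }
    }

    // Decorator: tracks access order and evicts the least-recently-used key
    // from the underlying cache once capacity is exceeded (LruCache's idea).
    static class LruDecorator implements SimpleCache {
        private final SimpleCache delegate;
        private final int capacity;
        private final LinkedHashMap<Object, Object> keyOrder =
                new LinkedHashMap<>(16, 0.75f, true); // access-ordered

        LruDecorator(SimpleCache delegate, int capacity) {
            this.delegate = delegate;
            this.capacity = capacity;
        }
        public void put(Object k, Object v) {
            delegate.put(k, v);
            keyOrder.put(k, k);
            if (keyOrder.size() > capacity) {
                Object eldest = keyOrder.keySet().iterator().next();
                keyOrder.remove(eldest);
                delegate.remove(eldest); // evict from the real cache
            }
        }
        public Object get(Object k) {
            keyOrder.get(k);             // record the access
            return delegate.get(k);
        }
        public void remove(Object k) {
            keyOrder.remove(k);
            delegate.remove(k);
        }
    }
}
```

This is why the eviction strategy is pluggable: swapping the decorator (FIFO, LRU, ...) changes the policy without touching the base cache.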

4) buildStatementFromContext(context.evalNodes("select|insert|update|delete")): wrapping the Cache into each MappedStatement

XMLMapperBuilder

// buildStatementFromContext()
private void buildStatementFromContext(List<XNode> list) {
    if (configuration.getDatabaseId() != null) {
        buildStatementFromContext(list, configuration.getDatabaseId());
    }
    buildStatementFromContext(list, null);
}

// buildStatementFromContext()
private void buildStatementFromContext(List<XNode> list, String requiredDatabaseId) {
    for (XNode context : list) {
        final XMLStatementBuilder statementParser = new XMLStatementBuilder(
                configuration, builderAssistant, context, requiredDatabaseId);
        try {
            // Each execution statement is converted into a MappedStatement
            statementParser.parseStatementNode();
        } catch (IncompleteElementException e) {
            configuration.addIncompleteStatement(statementParser);
        }
    }
}

XMLStatementBuilder#parseStatementNode

public void parseStatementNode() {
    String id = context.getStringAttribute("id");
    String databaseId = context.getStringAttribute("databaseId");

    if (!databaseIdMatchesCurrent(id, databaseId, this.requiredDatabaseId)) {
        return;
    }

    String nodeName = context.getNode().getNodeName();
    SqlCommandType sqlCommandType = SqlCommandType.valueOf(nodeName
            .toUpperCase(Locale.ENGLISH));
    boolean isSelect = sqlCommandType == SqlCommandType.SELECT;
    boolean flushCache = context.getBooleanAttribute("flushCache", !isSelect);
    boolean useCache = context.getBooleanAttribute("useCache", isSelect);
    boolean resultOrdered = context.getBooleanAttribute("resultOrdered", false);

    // Include Fragments before parsing
    XMLIncludeTransformer includeParser = new XMLIncludeTransformer(configuration, 
            builderAssistant);
    includeParser.applyIncludes(context.getNode());

    String parameterType = context.getStringAttribute("parameterType");
    Class<?> parameterTypeClass = resolveClass(parameterType);

    String lang = context.getStringAttribute("lang");
    LanguageDriver langDriver = getLanguageDriver(lang);

    // Parse selectKey after includes and remove them.
    processSelectKeyNodes(id, parameterTypeClass, langDriver);

    // Parse the SQL (pre: <selectKey> and <include> were parsed and removed)
    KeyGenerator keyGenerator;
    String keyStatementId = id + SelectKeyGenerator.SELECT_KEY_SUFFIX;
    keyStatementId = builderAssistant.applyCurrentNamespace(keyStatementId, true);
    if (configuration.hasKeyGenerator(keyStatementId)) {
        keyGenerator = configuration.getKeyGenerator(keyStatementId);
    } else {
        keyGenerator = context.getBooleanAttribute("useGeneratedKeys",
            configuration.isUseGeneratedKeys() && 
            SqlCommandType.INSERT.equals(sqlCommandType)) 
            ? Jdbc3KeyGenerator.INSTANCE : NoKeyGenerator.INSTANCE;
    }

    SqlSource sqlSource = langDriver.createSqlSource(configuration, context, 
            parameterTypeClass);
    StatementType statementType = StatementType.valueOf(context
            .getStringAttribute("statementType", StatementType.PREPARED.toString()));
    Integer fetchSize = context.getIntAttribute("fetchSize");
    Integer timeout = context.getIntAttribute("timeout");
    String parameterMap = context.getStringAttribute("parameterMap");
    String resultType = context.getStringAttribute("resultType");
    Class<?> resultTypeClass = resolveClass(resultType);
    String resultMap = context.getStringAttribute("resultMap");
    String resultSetType = context.getStringAttribute("resultSetType");
    ResultSetType resultSetTypeEnum = resolveResultSetType(resultSetType);
    if (resultSetTypeEnum == null) {
        resultSetTypeEnum = configuration.getDefaultResultSetType();
    }
    String keyProperty = context.getStringAttribute("keyProperty");
    String keyColumn = context.getStringAttribute("keyColumn");
    String resultSets = context.getStringAttribute("resultSets");

    // Create MappedStatement object
    builderAssistant.addMappedStatement(id, sqlSource, statementType, sqlCommandType, 
            fetchSize, timeout, parameterMap, parameterTypeClass, resultMap, 
            resultTypeClass,resultSetTypeEnum, flushCache, useCache, resultOrdered,
            keyGenerator, keyProperty, keyColumn, databaseId, langDriver, resultSets);
}

MapperBuilderAssistant#addMappedStatement

public MappedStatement addMappedStatement(String id, SqlSource sqlSource,
        StatementType statementType, SqlCommandType sqlCommandType, Integer fetchSize,
        Integer timeout, String parameterMap, Class<?> parameterType, String resultMap,
        Class<?> resultType, ResultSetType resultSetType, boolean flushCache, 
        boolean useCache, boolean resultOrdered, KeyGenerator keyGenerator,
        String keyProperty, String keyColumn, String databaseId, LanguageDriver lang,
        String resultSets) {

    if (unresolvedCacheRef) {
        throw new IncompleteElementException("Cache-ref not yet resolved");
    }

    id = applyCurrentNamespace(id, false);
    boolean isSelect = sqlCommandType == SqlCommandType.SELECT;

    // Create MappedStatement object
    MappedStatement.Builder statementBuilder = new MappedStatement.Builder(configuration, 
            id, sqlSource, sqlCommandType)
            .resource(resource).fetchSize(fetchSize).timeout(timeout)
            .statementType(statementType).keyGenerator(keyGenerator)
            .keyProperty(keyProperty).keyColumn(keyColumn).databaseId(databaseId)
            .lang(lang).resultOrdered(resultOrdered).resultSets(resultSets)
            .resultMaps(getStatementResultMaps(resultMap, resultType, id))
            .resultSetType(resultSetType)
            .flushCacheRequired(valueOrDefault(flushCache, !isSelect))
            .useCache(valueOrDefault(useCache, isSelect))
            .cache(currentCache);// Here, the previously generated Cache is encapsulated into MappedStatement

    ParameterMap statementParameterMap = getStatementParameterMap(parameterMap, 
            parameterType, id);
    if (statementParameterMap != null) {
        statementBuilder.parameterMap(statementParameterMap);
    }

    MappedStatement statement = statementBuilder.build();
    configuration.addMappedStatement(statement);
    return statement;
}

As we can see, the Cache object created for the Mapper is attached to every MappedStatement built from it; all MappedStatements in the same Mapper therefore share a single Cache instance.
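The sharing described above comes down to reference semantics, which a tiny demo can show. Stmt is an illustrative stand-in, not a MyBatis class:

```java
import java.util.HashMap;
import java.util.Map;

// Every "statement" in a namespace holds a reference to the same cache
// instance, so data cached via one statement is visible through the others.
public class SharedCacheDemo {

    static class Stmt {
        final Map<Object, Object> cache; // the shared namespace cache
        Stmt(Map<Object, Object> cache) { this.cache = cache; }
    }

    public static boolean sharesCache() {
        Map<Object, Object> nsCache = new HashMap<>(); // one cache per namespace
        Stmt select = new Stmt(nsCache);
        Stmt update = new Stmt(nsCache);
        select.cache.put("k", "v");               // cached through one statement...
        return "v".equals(update.cache.get("k")); // ...visible through another
    }
}
```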

This concludes the tag parsing done during configuration initialization.

2. Query execution, source code analysis

CachingExecutor#query

@Override
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, 
        ResultHandler resultHandler) throws SQLException {
    BoundSql boundSql = ms.getBoundSql(parameterObject);
    // Create CacheKey
    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}

@Override
public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, 
        ResultHandler resultHandler, CacheKey key, BoundSql boundSql) 
        throws SQLException {
    // Get the Cache from the MappedStatement. Note that this Cache comes from the
    // MappedStatement: it is the one created for the <cache/> tag in the Mapper and
    // stored in the Configuration. As analyzed above, every MappedStatement holds it.
    Cache cache = ms.getCache();
    // If <cache> is not configured in the Mapper file, cache is null.
    if (cache != null) {
        // If you need to refresh the cache, refresh it: flushCache="true".
        flushCacheIfRequired(ms);
        if (ms.isUseCache() && resultHandler == null) {
            ensureNoOutParams(ms, boundSql);
            // Accessing L2 cache
            @SuppressWarnings("unchecked")
            List<E> list = (List<E>) tcm.getObject(cache, key);
            // Cache Miss 
            if (list == null) {
                // If there is no value, the query will be executed. In fact, this query is also the first level cache query,
                // If there is no level-1 cache, query the DB.
                list = delegate.query(ms, parameterObject, rowBounds, resultHandler, 
                        key, boundSql);
                // Cache query results
                tcm.putObject(cache, key, list); // issue #578 and #116
            }
            return list;
        }
    }
    return delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
}

If flushCache="true" is set on a statement, the cache is cleared before each execution of that query.

<!-- Execute this statement to empty the cache -->
<select id="findById" resultType="com.demo.pojo.User" useCache="true" flushCache="true">
    select * from t_demo
</select>

As noted above, the L2 cache is obtained from the MappedStatement.
Because MappedStatements live in the global Configuration, the same Cache instance can be reached by multiple CachingExecutors, which raises thread-safety concerns; worse, if left uncontrolled, multiple transactions sharing one cache instance would produce dirty reads.
The dirty-read problem is handled by a helper class: the type behind the tcm field in the code above. Let's analyze it.

3. Transaction related cache processing, source code analysis

1)TransactionalCacheManager

/**
 * Transaction cache manager
 */
public class TransactionalCacheManager {

    // Mapping table between Cache and TransactionalCache
    private final Map<Cache, TransactionalCache> transactionalCaches = new HashMap<>();

    public void clear(Cache cache) {
        // Get the TransactionalCache object and call the clear method of the object, the same below.
        getTransactionalCache(cache).clear();
    }

    public Object getObject(Cache cache, CacheKey key) {
        // Get cache directly from TransactionalCache
        return getTransactionalCache(cache).getObject(key);
    }

    public void putObject(Cache cache, CacheKey key, Object value) {
        // Directly stored in the cache of TransactionalCache
        getTransactionalCache(cache).putObject(key, value);
    }

    public void commit() {
        for (TransactionalCache txCache : transactionalCaches.values()) {
            txCache.commit();
        }
    }

    public void rollback() {
        for (TransactionalCache txCache : transactionalCaches.values()) {
            txCache.rollback();
        }
    }

    private TransactionalCache getTransactionalCache(Cache cache) {
        // Get TransactionalCache from mapping table
        // TransactionalCache is also a decorative class that adds transaction functions to the Cache
        // Create a new TransactionalCache and save the real Cache object
        return transactionalCaches.computeIfAbsent(cache, TransactionalCache::new);
    }
}

TransactionalCacheManager maintains the mapping between Cache instances and their TransactionalCache wrappers; it only does this bookkeeping, while TransactionalCache does the real work.

2)TransactionalCache

TransactionalCache is a Cache decorator that adds transactional behaviour to a Cache instance.
The dirty-read problem mentioned earlier is handled by this class, so let's walk through its logic.

public class TransactionalCache implements Cache {

    private static final Log log = LogFactory.getLog(TransactionalCache.class);

    // The real Cache object is the same as the Cache in map < Cache, transactionalcache > above.
    private final Cache delegate;
    private boolean clearOnCommit;
    // Before the transaction is committed, the results of all queries from the database will be cached in this collection.
    private final Map<Object, Object> entriesToAddOnCommit;
    // Before the transaction is committed, when the cache misses, the CacheKey will be stored in this collection.
    private final Set<Object> entriesMissedInCache;

    public TransactionalCache(Cache delegate) {
        this.delegate = delegate;
        this.clearOnCommit = false;
        this.entriesToAddOnCommit = new HashMap<>();
        this.entriesMissedInCache = new HashSet<>();
    }

    @Override
    public String getId() {
        return delegate.getId();
    }

    @Override
    public int getSize() {
        return delegate.getSize();
    }

    @Override
    public Object getObject(Object key) {
        // issue #116
        // When querying, you query directly from delegate, that is, from the real cache object.
        Object object = delegate.getObject(key);
        if (object == null) {
            // If the cache misses, the key is stored in entriesMissedInCache.
            entriesMissedInCache.add(key);
        }
        // issue #146
        if (clearOnCommit) {
            return null;
        } else {
            return object;
        }
    }

    @Override
    public void putObject(Object key, Object object) {
        // Store the key value pairs into the entriesToAddOnCommit Map instead of the real cache object delegate.
        entriesToAddOnCommit.put(key, object);
    }

    @Override
    public Object removeObject(Object key) {
        return null;
    }

    @Override
    public void clear() {
        clearOnCommit = true;
        // Empty entriesToAddOnCommit, but not the delegate cache.
        entriesToAddOnCommit.clear();
    }

    public void commit() {
        // Determine whether to clear the delegate according to the value of clearOnCommit.
        if (clearOnCommit) {
            delegate.clear();
        }
        
        // Flush the pending entries into the delegate cache.
        flushPendingEntries();
        // Reset entriesToAddOnCommit and entriesMissedInCache
        reset();
    }

    public void rollback() {
        unlockMissedEntries();
        reset();
    }

    private void reset() {
        clearOnCommit = false;
        // Empty collection
        entriesToAddOnCommit.clear();
        entriesMissedInCache.clear();
    }

    private void flushPendingEntries() {
        for (Map.Entry<Object, Object> entry : entriesToAddOnCommit.entrySet()) {
            // Transfer the contents of entriesToAddOnCommit to delegate.
            delegate.putObject(entry.getKey(), entry.getValue());
        }
        for (Object entry : entriesMissedInCache) {
            if (!entriesToAddOnCommit.containsKey(entry)) {
                // Store null value
                delegate.putObject(entry, null);
            }
        }
    }

    private void unlockMissedEntries() {
        for (Object entry : entriesMissedInCache) {
            try {
                // Call removeObject to unlock
                delegate.removeObject(entry);
            } catch (Exception e) {
                log.warn("Unexpected exception while notifiying a rollback to " 
                        + "the cache adapter. Consider upgrading your cache adapter" 
                        + " to the latest version. Cause: " + e);
            }
        }
    }
}

When an L2-cache entry is stored, it is placed into TransactionalCache.entriesToAddOnCommit rather than the real cache, while every read goes straight to TransactionalCache.delegate. This is why a value cached after a database query does not take effect immediately: writing directly to the delegate would expose uncommitted data and cause dirty reads.
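The staging behaviour can be modelled in a few lines of plain Java. This is a minimal sketch of the idea, not the MyBatis class itself; pending plays the role of entriesToAddOnCommit:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of TransactionalCache staging: writes are parked in a pending
// map and only reach the shared delegate cache on commit; reads always go to
// the delegate, so uncommitted data is never visible to other sessions.
public class TxCacheSketch {
    private final Map<Object, Object> delegate;                  // shared L2 cache
    private final Map<Object, Object> pending = new HashMap<>(); // staged writes

    public TxCacheSketch(Map<Object, Object> delegate) { this.delegate = delegate; }

    public void put(Object k, Object v) { pending.put(k, v); }   // staged only
    public Object get(Object k) { return delegate.get(k); }      // bypasses the stage
    public void commit() { delegate.putAll(pending); pending.clear(); }
    public void rollback() { pending.clear(); }
}
```

A rollback simply drops the pending map, so rolled-back results never pollute the shared cache.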

3)SqlSession

Why does the L2 cache only take effect after the SqlSession is committed or closed?
Let's take a look at what SqlSession.commit() does.

// DefaultSqlSession.commit
@Override
public void commit(boolean force) {
    try {
        // Mainly this sentence
        executor.commit(isCommitOrRollbackRequired(force));
        dirty = false;
    } catch (Exception e) {
        throw ExceptionFactory.wrapException("Error committing transaction.  Cause: " 
                + e, e);
    } finally {
        ErrorContext.instance().reset();
    }
}

// CachingExecutor.commit
@Override
public void commit(boolean required) throws SQLException {
    delegate.commit(required);
    // here
    tcm.commit();
}

// TransactionalCacheManager.commit()
public void commit() {
    for (TransactionalCache txCache : transactionalCaches.values()) {
        // here
        txCache.commit();
    }
}

// TransactionalCache.commit()
public void commit() {
    if (clearOnCommit) {
        delegate.clear();
    }
    // This sentence
    flushPendingEntries();
    reset();
}

// TransactionalCache.flushPendingEntries()
private void flushPendingEntries() {
    for (Map.Entry<Object, Object> entry : entriesToAddOnCommit.entrySet()) {
        // Here, the objects of entriesToAddOnCommit are added to the delegate one by one,
        // Only then will the L2 cache really take effect
        delegate.putObject(entry.getKey(), entry.getValue());
    }
    for (Object entry : entriesMissedInCache) {
        if (!entriesToAddOnCommit.containsKey(entry)) {
            delegate.putObject(entry, null);
        }
    }
}

4. Update operations and L2 cache invalidation, source code analysis

Let's take a look at the update operation of SqlSession.

// DefaultSqlSession.update()
@Override
public int update(String statement, Object parameter) {
    try {
        dirty = true;
        MappedStatement ms = configuration.getMappedStatement(statement);
        return executor.update(ms, wrapCollection(parameter));
    } catch (Exception e) {
        throw ExceptionFactory.wrapException("Error updating database.  Cause: " + e, e);
    } finally {
        ErrorContext.instance().reset();
    }
}

// CachingExecutor.update()
@Override
public int update(MappedStatement ms, Object parameterObject) throws SQLException {
    flushCacheIfRequired(ms);
    return delegate.update(ms, parameterObject);
}

// CachingExecutor.flushCacheIfRequired()
private void flushCacheIfRequired(MappedStatement ms) {
    // Get the Cache corresponding to MappedStatement and empty it.
    Cache cache = ms.getCache();
    // The statement must have flushCacheRequired=true before the cache is emptied
    // (this is the default for non-select statements).
    if (cache != null && ms.isFlushCacheRequired()) {
        tcm.clear(cache);
    }
}

The MyBatis L2 cache is only suitable for data that rarely changes, such as administrative-region data (provinces, cities, districts, streets).
Once the underlying data changes, MyBatis empties the cache, so the L2 cache is a poor fit for frequently updated data.
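The invalidation behaviour of flushCacheIfRequired can be sketched as follows. This is an illustrative model, not MyBatis code; query here uses a simple cache-aside fill where the second argument stands in for the value loaded from the database:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of flushCacheIfRequired: an update whose statement has
// flushCacheRequired=true (the default for non-select statements) empties the
// whole namespace cache before the write.
public class FlushOnUpdateSketch {
    private final Map<Object, Object> namespaceCache = new HashMap<>();

    // Cache-aside read: fill the cache on a miss with the value "loaded from DB".
    public Object query(Object key, Object loadedFromDb) {
        return namespaceCache.computeIfAbsent(key, k -> loadedFromDb);
    }

    public void update(boolean flushCacheRequired) {
        if (flushCacheRequired) {
            namespaceCache.clear(); // mirrors tcm.clear(cache)
        }
        // ... the real database write would happen here
    }
}
```

Because a single update clears the entire namespace cache, frequent writes make the cache hit rate collapse, which is exactly the "poor fit" described above.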

3, Summary

In its L2 cache design, MyBatis makes heavy use of the decorator pattern, e.g. CachingExecutor and the various Cache decorators.

  • The L2 cache lets cached data be shared between SqlSessions; it is scoped to the namespace;
  • The L2 cache offers a rich set of cache strategies;
  • An L2 cache can be assembled from a base cache combined with multiple decorators;
  • The L2 cache is implemented by a caching decorator executor, CachingExecutor, together with a transactional cache, TransactionalCache.

Article content source: Lagou Education Java training camp.

Topics: Java Mybatis Cache source code