This project was built not only as practice but also to simplify my own work. Looking back, the front end took the longest; in fact, a mature framework such as eladmin could have been used instead.
Main technical points of the project:
- Dynamic data sources: databases are added at runtime, and queries can be switched to them once added
- Custom cache: a scheduled task cleans up expired entries, and AOP intercepts queries to populate the cache
- Embedded Derby database
- Multithreaded asynchronous queries, with each thread using a different data source
- AOP post-processing of remote query results
- Server access IP whitelist
- Unified exception handling
- Permission control
- Front/back-end separation
- Excel export
- Some notes on file download
Project stack: the back end uses Spring Boot 2.0.1.RELEASE + MyBatis 2.1.4 + Druid + Derby + POI + JWT + Lombok; the front end mainly uses Vue + ElementUI + Markdown.
Contents
1. Dynamic data sources: add databases at runtime and switch queries dynamically
2. Custom cache: scheduled cleanup plus AOP-based query interception
3. Embedded Derby database
4. Multithreaded asynchronous queries, each thread using a different data source
5. AOP post-processing of remote query results
6. Server access IP whitelist
7. Unified exception handling
8. Permission control
9. Front/back-end separation
10. Excel export
11. Some notes on file download
1. Dynamic data sources: add databases at runtime and switch queries dynamically
This is the core of the project: every other feature, including multi-source queries, depends on data sources being added this way. The key is Spring's dynamic routing data source, AbstractRoutingDataSource. It is an abstract class that we subclass and register as the system's DataSource. The one method it requires us to implement is:
```java
protected abstract Object determineCurrentLookupKey();
```
This method decides which data source to use. Look at the source:
```java
/**
 * Retrieve the current target DataSource. Determines the
 * {@link #determineCurrentLookupKey() current lookup key}, performs
 * a lookup in the {@link #setTargetDataSources targetDataSources} map,
 * falls back to the specified
 * {@link #setDefaultTargetDataSource default target DataSource} if necessary.
 * @see #determineCurrentLookupKey()
 */
protected DataSource determineTargetDataSource() {
    Assert.notNull(this.resolvedDataSources, "DataSource router not initialized");
    Object lookupKey = determineCurrentLookupKey();
    DataSource dataSource = this.resolvedDataSources.get(lookupKey);
    if (dataSource == null && (this.lenientFallback || lookupKey == null)) {
        dataSource = this.resolvedDefaultDataSource;
    }
    if (dataSource == null) {
        throw new IllegalStateException("Cannot determine target DataSource for lookup key [" + lookupKey + "]");
    }
    return dataSource;
}
```
If no data source is found for the lookup key, it falls back to the default data source. Therefore we need to define a default one. The complete subclass looks like this:
```java
public class SystemDynamicDataSource extends AbstractRoutingDataSource {

    private Map<Object, Object> dataSourceMap;

    public static final ThreadLocal<Integer> DATA_SOURCE = new ThreadLocal<>();

    private static SystemDynamicDataSource systemDynamicDataSource;

    @Autowired
    public void setSystemDynamicDataSource(SystemDynamicDataSource value) {
        systemDynamicDataSource = value;
    }

    public SystemDynamicDataSource(DataSource dataSource, Map<Object, Object> dataSourceMap) {
        this.dataSourceMap = dataSourceMap;
        // Default data source
        super.setDefaultTargetDataSource(dataSource);
        // Map of multiple data sources, e.g.
        //   master : jdbc:mysql://182.92.236.164:3306
        //   slave  : jdbc:mysql://182.92.236.164:3307
        super.setTargetDataSources(dataSourceMap);
        super.afterPropertiesSet();
    }

    public static void setDataSource(Integer key, DataSource dataSource) {
        DruidDataSource oldDataSource = (DruidDataSource) systemDynamicDataSource.dataSourceMap.put(key, dataSource);
        if (oldDataSource != null) {
            oldDataSource.close();
        }
        systemDynamicDataSource.afterPropertiesSet();
    }

    public static void removeDataSource(Integer key) {
        DruidDataSource oldDataSource = (DruidDataSource) systemDynamicDataSource.dataSourceMap.remove(key);
        if (oldDataSource != null) {
            oldDataSource.close();
        }
        systemDynamicDataSource.afterPropertiesSet();
    }

    public static boolean isExist(Integer key) {
        return systemDynamicDataSource.dataSourceMap.get(key) != null;
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return DATA_SOURCE.get();
    }

    public static void setDataSource(Integer dataSource) {
        DATA_SOURCE.set(dataSource);
    }

    public static void removeDataSource() {
        DATA_SOURCE.remove();
    }
}
```
To look up a value by key we would normally reach for a Map, but the crucial point is that determineCurrentLookupKey() takes no parameters, so there is no way to pass the key in. A ThreadLocal solves this: each thread sets its own key, and the router reads it to pick the data source for the current thread.
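For illustration, a minimal sketch of switching to a dynamically added data source around a query (the mapper name is hypothetical):

```java
// Bind the routing key for the current thread, query, then always clean up,
// so the key does not leak to the next request handled by this thread.
SystemDynamicDataSource.setDataSource(sysdbId);
try {
    return remoteQueryMapper.query(params); // hypothetical mapper call
} finally {
    SystemDynamicDataSource.removeDataSource();
}
```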
Data sources can also be added dynamically. Looking at the source, the data source is selected from the member variable resolvedDataSources, so a dynamically added data source ultimately has to end up in that map. The variable is private and cannot be reached directly, but afterPropertiesSet() copies the valid entries of targetDataSources into resolvedDataSources. So when implementing this, we not only set targetDataSources but also call afterPropertiesSet() manually to push the values through. To make later additions easier, I keep targetDataSources as a member variable (dataSourceMap). Look at the source code:
```java
public void afterPropertiesSet() {
    if (this.targetDataSources == null) {
        throw new IllegalArgumentException("Property 'targetDataSources' is required");
    }
    this.resolvedDataSources = new HashMap<>(this.targetDataSources.size());
    this.targetDataSources.forEach((key, value) -> {
        Object lookupKey = resolveSpecifiedLookupKey(key);
        DataSource dataSource = resolveSpecifiedDataSource(value);
        this.resolvedDataSources.put(lookupKey, dataSource);
    });
    if (this.defaultTargetDataSource != null) {
        this.resolvedDefaultDataSource = resolveSpecifiedDataSource(this.defaultTargetDataSource);
    }
}
```
After that it is easy to declare in a configuration class, but you must exclude DataSourceAutoConfiguration on the startup class. Look at the code:
```java
@Configuration
public class SystemDynamicDataSourceConfig {

    @Bean("systemDataSource")
    @ConfigurationProperties("spring.datasource.druid") // Bind the values under this prefix in the configuration file
    public DataSource systemDataSource() {
        DruidDataSource ds = new DruidDataSource();
        return ds;
    }

    @Primary // The primary data source; an error is reported without this
    @Bean
    // @Qualifier: there are multiple DataSource beans; the value is the beanName of the one we want
    public SystemDynamicDataSource dataSource(@Qualifier("systemDataSource") DataSource systemDataSource) {
        Map<Object, Object> map = new ConcurrentHashMap<>();
        return new SystemDynamicDataSource(systemDataSource, map);
    }
}
```
Excluding the auto-configuration:
```java
@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class})
public class IMWebApplication {
    public static void main(String[] args) {
        SpringApplication.run(IMWebApplication.class, args);
    }
}
```
An enum identifies which kind of database is being added:
```java
public enum DataTypeEnum {

    /** mysql */
    MYSQL("mysql", "mysql", "com.mysql.cj.jdbc.Driver", "`", "`", "'", "'"),

    /** oracle */
    ORACLE("oracle", "oracle", "oracle.jdbc.OracleDriver", "\"", "\"", "\"", "\"");

    private String feature;
    private String desc;
    private String driver;
    private String keywordPrefix;
    private String keywordSuffix;
    private String aliasPrefix;
    private String aliasSuffix;

    private static final String JDBC_URL_PREFIX = "jdbc:";

    DataTypeEnum(String feature, String desc, String driver, String keywordPrefix, String keywordSuffix,
                 String aliasPrefix, String aliasSuffix) {
        this.feature = feature;
        this.desc = desc;
        this.driver = driver;
        this.keywordPrefix = keywordPrefix;
        this.keywordSuffix = keywordSuffix;
        this.aliasPrefix = aliasPrefix;
        this.aliasSuffix = aliasSuffix;
    }

    public static DataTypeEnum urlOf(String jdbcUrl) {
        String url = jdbcUrl.toLowerCase().trim();
        for (DataTypeEnum dataTypeEnum : values()) {
            if (url.startsWith(JDBC_URL_PREFIX + dataTypeEnum.feature)) {
                try {
                    // Make sure the driver class is on the classpath
                    Class.forName(dataTypeEnum.getDriver());
                } catch (ClassNotFoundException e) {
                    throw new SystemInvalidException("Database driver not found: " + jdbcUrl);
                }
                return dataTypeEnum;
            }
        }
        return null;
    }

    public String getFeature() { return feature; }
    public String getDesc() { return desc; }
    public String getDriver() { return driver; }
}
```
Then comes the database utility class, used when adding a database or testing its connectivity:
```java
@Slf4j
public class DBUtils {

    /**
     * Get a data source with default pool sizes.
     */
    public static DataSource getDataSource(String jdbcUrl, String userName, String password) {
        return getDataSource(jdbcUrl, userName, password, 5, 5, 10);
    }

    private static DruidDataSource getDataSource(String jdbcUrl, String userName, String password,
                                                 int minIdle, int initial, int maxActive) {
        DruidDataSource druidDataSource = new DruidDataSource();
        String className;
        try {
            className = DriverManager.getDriver(jdbcUrl.trim()).getClass().getName();
        } catch (SQLException e) {
            throw new SystemInvalidException("Unrecognized database driver: " + jdbcUrl);
        }
        if (StringUtils.isEmpty(className)) {
            DataTypeEnum dataTypeEnum = DataTypeEnum.urlOf(jdbcUrl);
            if (null == dataTypeEnum) {
                throw new SystemInvalidException("Unsupported database driver: jdbcUrl=" + jdbcUrl);
            }
            druidDataSource.setDriverClassName(dataTypeEnum.getDriver());
        } else {
            druidDataSource.setDriverClassName(className);
        }
        druidDataSource.setUrl(jdbcUrl);
        druidDataSource.setUsername(userName);
        druidDataSource.setPassword(password);
        // Timeout for obtaining a connection
        druidDataSource.setMaxWait(5000);
        // Initial, minimum and maximum pool sizes
        druidDataSource.setInitialSize(initial);
        druidDataSource.setMinIdle(minIdle);
        druidDataSource.setMaxActive(maxActive);
        // If the connection fails, fail fast instead of retrying forever
        druidDataSource.setConnectionErrorRetryAttempts(1);
        druidDataSource.setBreakAfterAcquireFailure(true);
        try {
            druidDataSource.init();
        } catch (SQLException e) {
            druidDataSource.close();
            throw new SystemInvalidException("Database connection pool initialization failed" + e.getMessage(), e);
        }
        return druidDataSource;
    }

    /**
     * Test a connection.
     */
    public static boolean testConnection(String jdbcUrl, String userName, String password) {
        DruidDataSource dataSource = getDataSource(jdbcUrl, userName, password, 1, 1, 1);
        Connection connection = null;
        try {
            connection = dataSource.getConnection();
            if (null != connection) {
                return true;
            }
        } catch (Exception e) {
            throw new SystemInvalidException("Connection failed: " + e.getMessage());
        } finally {
            try {
                releaseConnection(connection);
            } catch (Exception e) {
                throw new SystemInvalidException("Connection failed:" + e.getMessage());
            }
            dataSource.close();
        }
        return false;
    }

    /**
     * Release a connection.
     */
    private static void releaseConnection(Connection connection) throws SQLException {
        if (null != connection) {
            connection.close();
        }
    }

    public static List<SystemDatabase> getSystemDatabaseByCache() {
        return getSystemDatabaseByCache(-1);
    }

    public static List<SystemDatabase> getSystemDatabaseByCache(int type) {
        List<SystemDatabase> list = new ArrayList<>();
        ConcurrentHashMap<String, SystemCacheUtils.SystemCache> dbcached =
                SystemCacheUtils.getCachedMapByName(Dict.SYSTEMDB_CACHE_KEY);
        if (dbcached == null || dbcached.size() == 0) {
            return list;
        }
        dbcached.values().forEach(x -> {
            SystemDatabase sb = new SystemDatabase();
            BeanUtils.copyProperties(x.getCache(), sb);
            list.add(sb);
        });
        switch (type) {
            case 0: return list.stream().filter(x -> x.getIsMDb() == 0).collect(Collectors.toList());
            case 1: return list.stream().filter(x -> x.getIsMDb() == 1).collect(Collectors.toList());
            case 2: return list.stream().filter(x -> x.getIsMDb() == 2).collect(Collectors.toList());
            default: return list;
        }
    }
}
```
About the DruidDataSource.close() calls that appear more than once in the code: when inspecting the heap I found these objects survived every garbage collection — something still referenced them, which is not great. So I call close() on any pool that is replaced or not kept, and it can then be collected.
Incidentally, when connecting to Oracle the URL can be written in two ways, SID format and ServiceName format:

SID format: jdbc:oracle:thin:@<host>:<port>:<SID>

ServiceName format: jdbc:oracle:thin:@//<host>:<port>/<service_name>
2. Custom cache: scheduled cleanup plus AOP-based query interception
The cache is a hand-rolled utility class, recorded here for reference — nothing special about it:
```java
public class SystemCacheUtils {

    private static final Logger log = LoggerFactory.getLogger(SystemCacheUtils.class);

    private static final int MAX_SIZE = 50000;

    private static ConcurrentHashMap<String, ConcurrentHashMap<String, SystemCache>> cacheData = new ConcurrentHashMap<>();

    private static final String DEFAULT_NAME = "SYSTEM_DEFAULT_NAME";

    public static Collection<ConcurrentHashMap<String, SystemCache>> getAllCaches() {
        return cacheData.values();
    }

    public static void set(String name, String key, Object value, long expiredTime) {
        if (StringUtils.isEmpty(name) || StringUtils.isEmpty(key) || StringUtils.isEmpty(value)) {
            throw new SystemInvalidException("Cannot set null key Or empty value");
        }
        ConcurrentHashMap<String, SystemCache> systemSystemCacheConcurrentHashMap = cacheData.get(name);
        if (systemSystemCacheConcurrentHashMap == null) {
            systemSystemCacheConcurrentHashMap = new ConcurrentHashMap<>(16);
            cacheData.put(name, systemSystemCacheConcurrentHashMap);
        } else {
            clearCacheMemory(name, systemSystemCacheConcurrentHashMap);
        }
        SystemCache systemCache = new SystemCache();
        systemCache.setCache(value);
        systemCache.setCurrentTime(System.currentTimeMillis());
        systemCache.setExpiredTime(expiredTime <= 0L ? 0L : (System.currentTimeMillis() + expiredTime));
        systemSystemCacheConcurrentHashMap.put(key, systemCache);
    }

    private static void clearCacheMemory(String name, ConcurrentHashMap<String, SystemCache> systemSystemCacheConcurrentHashMap) {
        // NOTE: two cache-key constants were redacted ("modify") in the original post
        if (Dict.CACHE_KEY modify.equals(name) || Dict.CACHE_KEY modify.equals(name) || Dict.SYSTEMDB_CACHE_KEY.equals(name)) {
            return;
        }
        // First pass: drop expired entries once the map grows past MAX_SIZE
        if (systemSystemCacheConcurrentHashMap.size() > MAX_SIZE) {
            systemSystemCacheConcurrentHashMap.forEach((key, value) -> {
                if (value.isExpired()) {
                    systemSystemCacheConcurrentHashMap.remove(key);
                }
            });
        }
        // Second pass: still too big, so evict the oldest entries by insert time
        if (systemSystemCacheConcurrentHashMap.size() > MAX_SIZE) {
            TreeMap<Long, String> treeMap = new TreeMap<>();
            systemSystemCacheConcurrentHashMap.forEach((key, value) -> {
                treeMap.put(value.currentTime, key);
            });
            treeMap.forEach((key, value) -> {
                if (systemSystemCacheConcurrentHashMap.size() > MAX_SIZE) {
                    systemSystemCacheConcurrentHashMap.remove(value);
                }
            });
        }
    }

    public static void set(String key, Object value) {
        set(DEFAULT_NAME, key, value, 0L);
    }

    public static void set(String key, Object value, long expiredTime) {
        set(DEFAULT_NAME, key, value, expiredTime);
    }

    public static void set(String name, String key, Object value) {
        set(name, key, value, 0L);
    }

    public static <T> T get(String name, String key) {
        if (StringUtils.isEmpty(name) || StringUtils.isEmpty(key)) return null;
        ConcurrentHashMap<String, SystemCache> systemSystemCacheConcurrentHashMap = cacheData.get(name);
        if (systemSystemCacheConcurrentHashMap == null) return null;
        SystemCache systemCache = systemSystemCacheConcurrentHashMap.get(key);
        if (systemCache == null) {
            return null;
        }
        if (systemCache.isExpired()) {
            systemSystemCacheConcurrentHashMap.remove(key);
            return null;
        }
        return (T) systemCache.getCache();
    }

    public static <T> T get(String key) {
        return get(DEFAULT_NAME, key);
    }

    public static ConcurrentHashMap<String, SystemCache> getCachedMapByName(String name) {
        return cacheData.get(name);
    }

    public static void remove(String name, String key) {
        ConcurrentHashMap<String, SystemCache> stringSystemCacheConcurrentHashMap = cacheData.get(name);
        if (stringSystemCacheConcurrentHashMap != null) {
            stringSystemCacheConcurrentHashMap.remove(key);
        }
    }

    public static void renew(String name, String key, long renewtime) {
        if (StringUtils.isEmpty(name) || StringUtils.isEmpty(key)) return;
        ConcurrentHashMap<String, SystemCache> systemSystemCacheConcurrentHashMap = cacheData.get(name);
        if (systemSystemCacheConcurrentHashMap == null) return;
        SystemCache systemCache = systemSystemCacheConcurrentHashMap.get(key);
        if (systemCache == null) {
            return;
        }
        systemCache.setExpiredTime(System.currentTimeMillis() + renewtime);
    }

    public static void renew(String key, long renewtime) {
        renew(DEFAULT_NAME, key, renewtime);
    }

    public static void remove(String key) {
        remove(DEFAULT_NAME, key);
    }

    @Data
    public static class SystemCache {
        private Object cache;
        private long expiredTime;
        private long currentTime;

        public boolean isExpired() {
            if (expiredTime <= 0L) {
                return false;
            }
            return expiredTime - System.currentTimeMillis() <= 0;
        }
    }
}
```
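For reference, typical usage looks like this (a minimal sketch; the key names and the SystemUser type are made up):

```java
// Put a value in the default region with a 30-minute TTL
SystemCacheUtils.set("user:42", someUser, 30 * 60 * 1000L);

// Read it back; null is returned when the entry is missing or expired
SystemUser cached = SystemCacheUtils.get("user:42"); // SystemUser is a hypothetical type

// Named region without expiry, then renew and remove the entry
SystemCacheUtils.set("online", "token:abc", onlineUser);
SystemCacheUtils.renew("online", "token:abc", 10 * 60 * 1000L);
SystemCacheUtils.remove("online", "token:abc");
```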
Then define a scheduled task for periodic cleanup:
```java
@Slf4j
@EnableScheduling
@Configuration
public class SystemCacheConfig {

    @Scheduled(fixedRate = 10 * 60 * 1000)
    public void fixedClearExpiredKeys() {
        log.info("------------- Start cleaning ---------------------");
        for (ConcurrentHashMap<String, SystemCacheUtils.SystemCache> cache : SystemCacheUtils.getAllCaches()) {
            if (cache == null) continue;
            cache.forEach((key, value) -> {
                if (value.isExpired()) {
                    cache.remove(key);
                    log.info("------------- " + key + " value " + value + " cleaned up ---------------------");
                }
            });
        }
        log.info("------------- Cleaning complete ---------------------");
    }
}
```
Custom caching here means caching only the important data. Three annotations carry the cache semantics:
```java
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface SystemCacheDelete { }

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface SystemCacheable { }

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface SystemCachePut { }
```
The aspect:
```java
@Component
@Aspect
public class SystemCacheAspect {

    /** Pointcuts; the methods have no body and exist only to carry the annotations */
    @Pointcut("@annotation(com.ibank.im.app.aop.cache.annotation.SystemCacheable)")
    public void systemCacheable() { }

    @Pointcut("@annotation(com.ibank.im.app.aop.cache.annotation.SystemCachePut)")
    public void systemCachePut() { }

    @Pointcut("@annotation(com.ibank.im.app.aop.cache.annotation.SystemCacheDelete)")
    public void systemCacheDelete() { }

    @Around("systemCacheable()")
    public Object systemCacheableAround(ProceedingJoinPoint joinPoint) throws Throwable {
        Object[] args = joinPoint.getArgs();
        ConcurrentHashMap<String, SystemCacheUtils.SystemCache> cachedMapByName =
                SystemCacheUtils.getCachedMapByName(Dict.SYSTEMDB_CACHE_KEY);
        if (cachedMapByName == null || cachedMapByName.size() == 0) {
            // Cache is cold: run the query and populate it
            if (args == null || args.length == 0) {
                List<SystemDatabase> proceed = (List<SystemDatabase>) joinPoint.proceed();
                proceed.forEach(db -> {
                    SystemCacheUtils.set(Dict.SYSTEMDB_CACHE_KEY, String.valueOf(db.getSysdbId()), db);
                });
                return proceed;
            } else {
                SystemDatabase sb = (SystemDatabase) joinPoint.proceed(args);
                if (sb != null) {
                    SystemCacheUtils.set(Dict.SYSTEMDB_CACHE_KEY, String.valueOf(sb.getSysdbId()), sb);
                }
                return sb;
            }
        } else {
            // Cache is warm: answer from it, falling back to the query on a miss
            if (args == null || args.length == 0) {
                return DBUtils.getSystemDatabaseByCache();
            } else {
                SystemCacheUtils.SystemCache systemCache = cachedMapByName.get(String.valueOf(args[0]));
                if (systemCache == null || systemCache.getCache() == null) {
                    SystemDatabase sb = (SystemDatabase) joinPoint.proceed();
                    if (sb != null) {
                        SystemCacheUtils.set(Dict.SYSTEMDB_CACHE_KEY, String.valueOf(sb.getSysdbId()), sb);
                    }
                    return sb;
                }
                return systemCache.getCache();
            }
        }
    }

    @Around("systemCachePut()")
    public void systemPutAround(ProceedingJoinPoint joinPoint) throws Throwable {
        SystemDatabase proceed = (SystemDatabase) joinPoint.getArgs()[0];
        joinPoint.proceed();
        SystemDatabase o = SystemCacheUtils.get(Dict.SYSTEMDB_CACHE_KEY, String.valueOf(proceed.getSysdbId()));
        if (o != null) {
            // Merge non-null fields of the updated entity into the cached copy
            for (Field declaredField : o.getClass().getDeclaredFields()) {
                declaredField.setAccessible(true);
                Object o1 = declaredField.get(proceed);
                if (o1 != null) {
                    declaredField.set(o, o1);
                }
            }
            SystemCacheUtils.set(Dict.SYSTEMDB_CACHE_KEY, String.valueOf(proceed.getSysdbId()), o);
        } else {
            SystemCacheUtils.set(Dict.SYSTEMDB_CACHE_KEY, String.valueOf(proceed.getSysdbId()), proceed);
        }
    }

    @Around("systemCacheDelete()")
    public void systemCacheUpdateAround(ProceedingJoinPoint joinPoint) throws Throwable {
        Integer proceed = (Integer) joinPoint.getArgs()[0];
        joinPoint.proceed();
        SystemCacheUtils.remove(Dict.SYSTEMDB_CACHE_KEY, String.valueOf(proceed));
    }
}
```
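Applied to the data-access layer, the annotations read roughly like this (a sketch — only getSystemDatabase() appears in the project code below; the other signatures are assumptions matching what the aspect relies on: the no-arg method returns the full list, the one-arg methods take the database id or entity as the first argument):

```java
@Mapper
public interface SystemDataBaseMapper {

    // No args: the aspect caches every returned SystemDatabase by its id
    @SystemCacheable
    List<SystemDatabase> getSystemDatabase();

    // One arg: the aspect looks up args[0] in the cache before proceeding
    @SystemCacheable
    SystemDatabase getSystemDatabaseById(Integer sysdbId); // hypothetical

    // The aspect merges the updated entity into the cached copy afterwards
    @SystemCachePut
    void updateSystemDatabase(SystemDatabase db); // hypothetical

    // The aspect evicts the cache entry for args[0] afterwards
    @SystemCacheDelete
    void deleteSystemDatabase(Integer sysdbId); // hypothetical
}
```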
Finally, the important data is cached at startup:
```java
@Slf4j
@Component
public class SystemDatabaseJobRunner implements ApplicationRunner {

    @Resource
    private SystemDataBaseMapper systemDataBaseMapper;

    @Autowired
    private TaskJobExcutorService taskJobExcutorService;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        log.info("-------------------- Start injecting databases ---------------------");
        List<SystemDatabase> systemDatabase = systemDataBaseMapper.getSystemDatabase();
        if (systemDatabase == null || systemDatabase.size() == 0) {
            log.warn("------------------ No database is configured; add one first ---------------");
        } else {
            systemDatabase.forEach(x -> {
                try {
                    SystemDynamicDataSource.setDataSource(x.getSysdbId(),
                            DBUtils.getDataSource(x.getSysdbAddr(), x.getSysdbUser(), x.getSysdbPwd()));
                } catch (Exception e) {
                    log.error("Database injection failed:", e);
                }
            });
            log.info("-------------------- Database injection completed ---------------------");
        }
        List<SystemDatabase> systemDatabaseByCache = DBUtils.getSystemDatabaseByCache(1);
        if (systemDatabaseByCache.size() == 0) {
            log.warn("------------------ The main database is not configured; add one first ---------------");
        } else {
            systemDatabaseByCache.forEach(x -> {
                // NOTE: the method names were redacted ("modify") in the original post
                taskJobExcutorService.refresh modify Cache(x);
                taskJobExcutorService.refresh modify Cache(x);
            });
        }
    }
}
```
3. Embedded Derby database
The advantage of Derby is that nothing needs to be installed. For various reasons an embedded database had to be used here.
Maven dependency:
```xml
<dependency>
    <groupId>org.apache.derby</groupId>
    <artifactId>derby</artifactId>
    <version>10.13.1.1</version>
</dependency>
```
Connection parameters:
```yaml
spring:
  datasource:
    druid:
      db-type: com.alibaba.druid.pool.DruidDataSource
      url: jdbc:derby:datas/imappdb;create=true
      driver-class-name: org.apache.derby.jdbc.EmbeddedDriver
      username: aaa
      password: aaaaaa
      # Initial number of connections
      initial-size: 5
      # Minimum number of connections
      min-idle: 5
      # Maximum number of connections
      max-active: 10
      # Timeout for obtaining a connection
      max-wait: 5000
      # Interval between connection-validity checks
      time-between-eviction-runs-millis: 60000
      # Minimum lifetime of a connection in the pool
      min-evictable-idle-time-millis: 300000
      # Maximum lifetime of a connection in the pool
      max-evictable-idle-time-millis: 900000
      test-while-idle: true
      test-on-borrow: false
      test-on-return: false
      # Query used to check whether a connection is valid
      validation-query: values 1
      query-timeout: 1
```
A few of these deserve explanation. The url starts with jdbc:derby: followed by the database path (I added a folder, datas/, in front). The create=true parameter creates the database on disk at first startup if it does not already exist, so the data is persisted under that directory. The username and password are whatever you set. And the validation query for Derby is values 1.
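To sanity-check the embedded database outside of Spring, plain JDBC works too (a minimal sketch; the path and credentials must match the configuration above):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DerbySmokeTest {
    public static void main(String[] args) throws Exception {
        // create=true creates datas/imappdb on first use
        String url = "jdbc:derby:datas/imappdb;create=true";
        try (Connection conn = DriverManager.getConnection(url, "aaa", "aaaaaa");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("values 1")) { // the same validation query Druid uses
            rs.next();
            System.out.println("Derby is up, values 1 returned " + rs.getInt(1));
        }
    }
}
```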
An example of creating a table:
```sql
create table SystemDatabase(
    sysdbId INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1) primary key,
    sysdbName VARCHAR(50) not null,
    sysdbAddr VARCHAR(500) not null,
    sysdbUser VARCHAR(20) not null,
    sysdbPwd VARCHAR(50) not null,
    isMDb INTEGER not null default 0 -- database type
);
```
In addition, to get the generated primary key back when inserting with MyBatis, use a selectKey:
<insert id="insertSystemDatabase" parameterType="com.ibank.im.app.entity.db.SystemDatabase"> <selectKey keyColumn="sysdbId" keyProperty="sysdbId" resultType="int" order="AFTER"> VALUES IDENTITY_VAL_LOCAL() </selectKey> insert into SystemDatabase(sysdbName,sysdbAddr,sysdbUser,sysdbPwd,isMDb) values(#{sysdbName},#{sysdbAddr},#{sysdbUser},#{sysdbPwd},#{isMDb}) </insert>
By the way, MyBatis supports lazy loading: a result mapping node has a select attribute that can point to the id of another select statement, and nothing extra needs to be written on the Mapper interface:
<resultMap id="resultIMDealScript" type="com.ibank.im.app.entity.imdeal.IMDealScript"> <id property="imSptId" column="imSptId"></id> <result property="imSptSQL" column="imSptSQL"></result> <result property="imSptParams" column="imSptParams"></result> <result property="imSptRelationId" column="imSptRelationId"></result> <result property="imSptRelationParams" column="imSptRelationParams"></result> <result property="imdlId" column="imdlId"></result> <association property="relationScript" column="imSptRelationId" select="getRelationScript"></association> <collection property="systemDatabaseList" column="imSptId" select="getRelationSystemDatabaseList"></collection> </resultMap>
4. Multithreaded asynchronous queries, each thread using a different data source
The gist of this part is one configuration class and one annotated method. The configuration:
```java
@Configuration
@EnableAsync
public class AsyncThreadTaskConfig {

    @Primary
    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // Core pool size
        executor.setCorePoolSize(32);
        // Maximum number of threads
        executor.setMaxPoolSize(50);
        // Queue capacity
        executor.setQueueCapacity(200);
        // Keep-alive time
        executor.setKeepAliveSeconds(60);
        // Thread name prefix
        executor.setThreadNamePrefix("Asynchronous task-");
        // RejectedExecutionHandler: what to do with new tasks once the pool is saturated.
        // CallerRunsPolicy: the task is not run on a pool thread but on the caller's thread
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.initialize();
        return executor;
    }
}
```
The method side mainly needs the @Async annotation. If no return value is required, a void method takes effect as-is; if there is a return value, it must be wrapped in a Future<>, returned like this:

```java
return AsyncResult.forValue(remoteDBQueryMapper.queryRemoteQueryLog(params));
```

The caller then gets the result from the Future. The interface is defined as:

```java
Future<List<Map>> queryRemoteData(SystemDatabase systemDatabase, Map map);
```
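Putting the pieces together, an implementation might look like this — a sketch, not the project's actual code: the @Async method runs on the pool configured above, binds its own routing key, queries, and always clears the ThreadLocal so the pooled thread is clean for the next task.

```java
@Service
public class RemoteQueryService {

    @Resource
    private RemoteDBQueryMapper remoteDBQueryMapper; // mapper name taken from the snippet above

    @Async
    public Future<List<Map>> queryRemoteData(SystemDatabase systemDatabase, Map map) {
        // Each worker thread routes to its own data source via the ThreadLocal key
        SystemDynamicDataSource.setDataSource(systemDatabase.getSysdbId());
        try {
            return AsyncResult.forValue(remoteDBQueryMapper.queryRemoteQueryLog(map));
        } finally {
            SystemDynamicDataSource.removeDataSource();
        }
    }
}
```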
5. AOP obtains the remote results and performs secondary processing
Why secondary processing? When querying a remote database I do not know the fields or their types in advance, so nothing is mapped explicitly and everything comes back with default handling. That causes problems with date columns: against Oracle, MyBatis returns oracle.sql.TIMESTAMP by default, which displays fine but always fails JSON serialization when sent to the front end. Hence the second pass. The field names also need to be lowercased, because the subsequent data processing depends on it.
The annotation:
```java
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface ResultHandle { }
```
The aspect:
```java
@Component
@Aspect // the original snippet omitted these two annotations; the aspect needs them to be picked up
public class ResultHandlerAspect {

    /** Pointcut; the method has no body and exists only to carry the annotation */
    @Pointcut("@annotation(com.ibank.im.app.aop.lower.annotation.ResultHandle)")
    public void resultHandler() { }

    @Around("resultHandler()")
    public List<Map<String, String>> systemCacheableAround(ProceedingJoinPoint joinPoint) throws Throwable {
        List<Map> result = (List<Map>) joinPoint.proceed();
        List<Map<String, String>> list = new ArrayList<>();
        for (Map map : result) {
            Map<String, String> data = new HashMap();
            map.forEach((key, value) -> {
                if (value != null) {
                    if (value instanceof oracle.sql.TIMESTAMP) {
                        try {
                            data.put(((String) key).toLowerCase(),
                                    ((oracle.sql.TIMESTAMP) value).timestampValue().toLocalDateTime()
                                            .format(DateTimeFormatter.ISO_DATE_TIME));
                        } catch (SQLException e) {
                            throw new SystemInvalidException("Date format conversion error", e);
                        }
                    } else if (value instanceof Timestamp) {
                        data.put(((String) key).toLowerCase(),
                                ((Timestamp) value).toLocalDateTime().format(DateTimeFormatter.ISO_DATE));
                    } else if (value instanceof BigDecimal) {
                        data.put(((String) key).toLowerCase(), ((BigDecimal) value).toString());
                    } else {
                        data.put(((String) key).toLowerCase(), String.valueOf(value));
                    }
                }
            });
            list.add(data);
        }
        return list;
    }
}
```
6. Server access IP whitelist
This can only be set at startup, for example allow = "127.0.0.1|192.168.111.13|192.168.111.132" and deny = "192.168.111.132|192.168.111.135|192.168.111.134". Tomcat's design here follows the chain-of-responsibility pattern: execution passes from one pipeline to the next, each pipeline contains valves, and RemoteAddrValve is the valve responsible for access control.
```java
@Configuration
public class TomcatConfig {

    @Value("${imserver.allowIP}")
    private String allowIP;

    @Value("${imserver.denyIP}")
    private String denyIP;

    @Value("${imserver.port}")
    private int serverPort;

    @Bean
    public TomcatServletWebServerFactory servletContainer() {
        TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory();
        tomcat.setPort(serverPort);
        if ((!CommonUtils.isNullOrEmpty(allowIP) && !"*".equals(allowIP))
                || (!CommonUtils.isNullOrEmpty(denyIP) && !"*".equals(denyIP))) {
            RemoteAddrValve remoteAddrValve = new RemoteAddrValve();
            if (!CommonUtils.isNullOrEmpty(allowIP) && !"*".equals(allowIP)) {
                remoteAddrValve.setAllow(allowIP);
            }
            if (!CommonUtils.isNullOrEmpty(denyIP) && !"*".equals(denyIP)) {
                remoteAddrValve.setDeny(denyIP);
            }
            tomcat.addContextValves(remoteAddrValve);
        }
        return tomcat;
    }
}
```
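The corresponding entries in application.yml might look like this — an assumption based on the @Value placeholders above, not a snippet from the project:

```yaml
imserver:
  port: 8080
  # "*" disables a rule; multiple addresses are separated by |
  allowIP: 127.0.0.1|192.168.111.13
  denyIP: 192.168.111.132|192.168.111.135
```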
7. Unified exception handling
The core is just two annotations, @RestControllerAdvice and @ExceptionHandler. Code:
```java
@RestControllerAdvice
public class SystemExceptionHandler {

    @ExceptionHandler(Exception.class)
    @ResponseBody
    public ResponseEntity handleException(Exception e) throws Exception {
        // Let security exceptions propagate to the security layer's own handlers
        if (e instanceof AccessDeniedException || e instanceof AuthenticationException) {
            throw e;
        }
        if (e instanceof SystemInvalidException) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(e.getMessage());
        }
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("System internal error");
    }
}
```
Why rethrow those exceptions? They have to reach the security layer so the front end can react properly for permission control, described next.
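The SystemInvalidException used throughout is just a business runtime exception; a minimal sketch (the project's actual class may differ):

```java
// Hypothetical minimal version of the project's business exception
public class SystemInvalidException extends RuntimeException {

    public SystemInvalidException(String message) {
        super(message);
    }

    public SystemInvalidException(String message, Throwable cause) {
        super(message, cause);
    }
}
```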
8. Permission control
This part is based on eladmin's code and uses JWT for authentication. If you are interested, go read the eladmin source.
```java
@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true)
public class SystemSecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private JwtAuthenticationEntryPoint authenticationErrorHandler;

    @Autowired
    private JwtAccessDeniedHandler jwtAccessDeniedHandler;

    @Autowired
    private TokenConfigurer tokenConfigurer;

    @Bean
    GrantedAuthorityDefaults grantedAuthorityDefaults() {
        // Remove the ROLE_ prefix
        return new GrantedAuthorityDefaults("");
    }

    @Bean
    public PasswordEncoder passwordEncoder() {
        // Password hashing scheme
        return new BCryptPasswordEncoder();
    }

    @Override
    protected void configure(HttpSecurity httpSecurity) throws Exception {
        httpSecurity
                // Disable CSRF
                .csrf().disable()
                // Authorization exceptions
                .exceptionHandling()
                .authenticationEntryPoint(authenticationErrorHandler)
                .accessDeniedHandler(jwtAccessDeniedHandler)
                // Prevent iframes from causing cross-domain problems
                .and()
                .headers()
                .frameOptions()
                .disable()
                // Do not create sessions
                .and()
                .sessionManagement()
                .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
                .and()
                .authorizeRequests()
                // Static resources etc.
                .antMatchers(
                        "/**/*.ico", "/**/*.svg", "/**/*.gif", "/**/*.html",
                        "/**/*.css", "/**/*.js", "/**/*.png", "/**/*.jpg",
                        "/**/*.woff", "/**/*.woff2", "/**/*.ttf", "/**/*.eot"
                ).permitAll()
                // Let OPTIONS requests through
                .antMatchers(HttpMethod.OPTIONS, "/**").permitAll()
                // Custom anonymous access: these URLs allow anonymous and token access
                .antMatchers("/auth/*", "/imdeal/getimg").permitAll()
                // All other requests require authentication
                .anyRequest().authenticated()
                .and().apply(tokenConfigurer);
    }
}
```
```java
@Component
public class TokenConfigurer extends SecurityConfigurerAdapter<DefaultSecurityFilterChain, HttpSecurity> {

    @Autowired
    private TokenFilter customFilter;

    @Override
    public void configure(HttpSecurity http) {
        http.addFilterBefore(customFilter, UsernamePasswordAuthenticationFilter.class);
    }
}
```
```java
@Component
public class JwtAccessDeniedHandler implements AccessDeniedHandler {

    @Override
    public void handle(HttpServletRequest request, HttpServletResponse response,
                       AccessDeniedException accessDeniedException) throws IOException {
        // Called when a user accesses a protected REST resource without authorization;
        // sends a 403 Forbidden response
        response.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedException.getMessage());
    }
}
```
```java
@Component
public class JwtAuthenticationEntryPoint implements AuthenticationEntryPoint {

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response,
                         AuthenticationException authException) throws IOException {
        // Called when a user attempts to access a secured REST resource
        // without providing any credentials; sends a 401 response
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
                authException == null ? "Unauthorized" : authException.getMessage());
    }
}
```
```java
@Component
public class TokenFilter extends GenericFilterBean {

    private static final Logger log = LoggerFactory.getLogger(TokenFilter.class);

    @Autowired
    private TokenProvider tokenProvider;

    @Autowired
    private SystemSecurityProperties properties;

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
            throws IOException, ServletException {
        HttpServletRequest httpServletRequest = (HttpServletRequest) servletRequest;
        String token = SecurityUtils.resolveToken(httpServletRequest, properties);
        // If the token is empty, there is no need to check the cache
        if (!CommonUtils.isNullOrEmpty(token)) {
            OnlineUser onlineUser = SystemCacheUtils.get(properties.getOnlineKey() + token);
            if (onlineUser != null && StringUtils.hasText(token)) {
                SecurityContextHolder.getContext().setAuthentication(
                        new UsernamePasswordAuthenticationToken(onlineUser, token, onlineUser.getAuthorities()));
                // Token renewal
                tokenProvider.checkRenewal(token);
            }
        }
        filterChain.doFilter(servletRequest, servletResponse);
    }
}
```
```java
@Component
public class TokenProvider implements InitializingBean {

    private final String AUTHORITIES_KEY = "system-auth";

    private JwtParser jwtParser;
    private JwtBuilder jwtBuilder;

    @Autowired
    private SystemSecurityProperties properties;

    @Override
    public void afterPropertiesSet() {
        byte[] keyBytes = Decoders.BASE64.decode(properties.getBase64Secret());
        Key key = Keys.hmacShaKeyFor(keyBytes);
        jwtParser = Jwts.parserBuilder()
                .setSigningKey(key)
                .build();
        jwtBuilder = Jwts.builder()
                .signWith(key, SignatureAlgorithm.HS512);
    }

    /**
     * Create a token that never expires by itself;
     * the token's time validity is maintained in the cache instead.
     */
    public String createToken(Authentication authentication) {
        // Get the permission list
        String authorities = authentication.getAuthorities().stream()
                .map(GrantedAuthority::getAuthority)
                .collect(Collectors.joining(","));
        return jwtBuilder
                // Add an ID so that generated tokens are never identical
                .setId(CommonUtils.randomUUID())
                .claim(AUTHORITIES_KEY, authorities)
                .setSubject(authentication.getName())
                .compact();
    }

    /**
     * @param token the token to check for renewal
     */
    public void checkRenewal(String token) {
        Object o = SystemCacheUtils.get(properties.getOnlineKey() + token);
        if (o != null) {
            SystemCacheUtils.renew(properties.getOnlineKey() + token, properties.getRenew());
        }
    }
}
```
@Service(value = "imsysel") public class IMSysElPermissionConfig { public Boolean check(String ...permissions){ final OnlineUser authentication = getCurrentUser(); if(authentication.getUser().getIsAdmin()) { return true; } Set<String> permissons = authentication.getAuthorities().stream().map(GrantedAuthority::getAuthority).collect(Collectors.toSet()); return Arrays.stream(permissions).anyMatch(permissons::contains); } /** * Get system user name * * @return System user name */ private OnlineUser getCurrentUser() { final Authentication authentication = SecurityContextHolder.getContext().getAuthentication(); if (authentication == null) { throw new SystemInvalidException("Current login status expired"); } OnlineUser principal = (OnlineUser) authentication.getPrincipal(); if(principal != null) { return principal; } throw new SystemInvalidException("The currently logged in information cannot be found"); } }
A short analysis. @EnableGlobalMethodSecurity turns on method-level security annotations; the global configuration still needs to extend WebSecurityConfigurerAdapter. The dedicated authorization-exception handlers feel less necessary — the unified exception handling could probably cover them, so this could be optimized.
```java
SecurityContextHolder.getContext().setAuthentication(
        new UsernamePasswordAuthenticationToken(onlineUser, token, onlineUser.getAuthorities()));
```

This line performs the authentication; every request is re-authenticated this way.
IMSysElPermissionConfig verifies permissions at the method level. The UsernamePasswordAuthenticationToken already covers authentication, but permissions still need to be checked. The core annotation is:
@PreAuthorize("@imsysel.check('sysrole')")
The login code is as follows:
@RestController @RequestMapping("/auth") public class AuthorizationController { @Autowired private SystemSecurityProperties properties; @Autowired private SystemUserAndRoleService systemUserAndRoleService; @Autowired private OnlineUserService onlineUserService; @Autowired private TokenProvider tokenProvider; @Autowired private AuthenticationManagerBuilder authenticationManagerBuilder; @GetMapping("/code") public ResponseEntity getCode() { return ResponseEntity.ok(ImageUtils.getBase64Images()); } @PostMapping("/login") public ResponseEntity login(@RequestBody AuthUserVo authUser, HttpServletRequest request) { // Password decryption String password = null; try { password = RsaUtils.decryptByPrivateKey(RsaProperties.privateKey, authUser.getUserPwd()); } catch (Exception e) { throw new SystemInvalidException("Password decryption exception", e); } // Query verification code String code = SystemCacheUtils.get(Dict.TOKEN_IMAGE,authUser.getUuid()); // Clear verification code SystemCacheUtils.remove(Dict.TOKEN_IMAGE,authUser.getUuid()); if (CommonUtils.isNullOrEmpty(code)) { throw new SystemInvalidException("The verification code does not exist or has expired"); } if (CommonUtils.isNullOrEmpty(authUser.getToken()) || !authUser.getToken().equalsIgnoreCase(code)) { throw new SystemInvalidException("Verification code error"); } UsernamePasswordAuthenticationToken authenticationToken = new UsernamePasswordAuthenticationToken(authUser.getUserId(), password); Authentication authentication = null; try { authentication = authenticationManagerBuilder.getObject().authenticate(authenticationToken); } catch (Exception e) { throw new SystemInvalidException("Wrong user name or password",e); } SecurityContextHolder.getContext().setAuthentication(authentication); // Generate token String token = tokenProvider.createToken(authentication); final OnlineUser onlineUser = (OnlineUser) authentication.getPrincipal(); // Save online information onlineUserService.save(onlineUser, token, request); onlineUserService.checkLoginOnUser(authUser.getUserId(), token); return ResponseEntity.ok(properties.getTokenStartWith() + token); } }
9. Front/back-end separation
The front end is mainly Vue + ElementUI; the project borrows from eladmin. There is not much to say about it apart from the encapsulated request module:
```js
import axios from 'axios'
import router from '@/router'
import { Notification } from 'element-ui'
// import store from '../store'
import { getToken, removeToken } from '@/utils/auth'
import Config from '@/settings'
// import Cookies from 'js-cookie'

// Create an axios instance
const service = axios.create({
  baseURL: process.env.NODE_ENV === 'production' ? process.env.VUE_APP_BASE_API : '/', // api base_url
  timeout: Config.timeout // request timeout
})

// Request interceptor
service.interceptors.request.use(
  config => {
    if (getToken()) {
      config.headers.IMWebAuthorization = getToken() // Attach the custom token header to every request
    }
    config.headers['Content-Type'] = 'application/json'
    return config
  },
  error => {
    Promise.reject(error)
  }
)

// Response interceptor
service.interceptors.response.use(
  response => {
    if (response.data instanceof Blob) {
      return response
    }
    return response.data
  },
  error => {
    if (error.toString().indexOf('Error: timeout') !== -1) {
      Notification.error({ title: 'Network request timeout', duration: 300000 })
      return Promise.reject(error)
    }
    // Handle the JSON error message of a failed blob download
    if (error.response.data instanceof Blob) {
      const reader = new FileReader()
      reader.readAsText(error.response.data, 'utf-8')
      reader.onload = function (e) {
        const errorMsg = reader.result
        Notification.error({ title: errorMsg, duration: 3000 })
      }
    } else {
      let code = 0
      try {
        code = error.response.status
      } catch (e) {
        if (error.toString().indexOf('Error: timeout') !== -1) {
          Notification.error({ title: 'Network request timeout', duration: 3000 })
          return Promise.reject(error)
        }
      }
      if (code) {
        if (code === 401) {
          Notification.warning({
            title: 'login timeout',
            duration: 3000,
            onClose: () => {
              removeToken()
              location.reload()
            }
          })
        } else if (code === 403) {
          router.push({ path: '/401' })
        } else if (code === 500) {
          const errorMsg = error.response.data
          if (errorMsg !== undefined) {
            Notification.error({ title: errorMsg, duration: 3000 })
          } else {
            Notification.error({ title: 'unknown error', duration: 3000 })
          }
        } else {
          const errorMsg = error.response.data.message
          if (errorMsg !== undefined) {
            Notification.error({ title: errorMsg, duration: 5000 })
          } else {
            Notification.error({ title: 'unknown error', duration: 3000 })
          }
        }
      } else {
        Notification.error({ title: 'Remote Server Disconnected', duration: 3000 })
      }
    }
    return Promise.reject(error)
  }
)

export default service
```
The encapsulated file download:
```js
export default function filedownload (res) {
  const filename = res.headers['content-disposition']
  const blob = new Blob([res.data])
  var downloadElement = document.createElement('a')
  var href = window.URL.createObjectURL(blob)
  downloadElement.href = href
  let finalfilename = filename.split('filename=')[1]
  if (finalfilename.startsWith('"')) {
    finalfilename = finalfilename.substring(1)
  }
  if (finalfilename.endsWith('"')) {
    finalfilename = finalfilename.substring(0, finalfilename.length - 1)
  }
  downloadElement.download = decodeURIComponent(finalfilename)
  document.body.appendChild(downloadElement)
  downloadElement.click()
  document.body.removeChild(downloadElement)
  window.URL.revokeObjectURL(href)
}
```
The axios file-download request:
```js
request({
  url: '/querylog/downloadlog',
  method: 'post',
  responseType: 'blob',
  data: this.logForm
}).then(res => {
  filedownload(res)
  this.downloading = false
  this.$message({ message: 'query was successful', type: 'success' })
}).catch(res => {
  this.downloading = false
})
```
And an upload-plus-download request:
```js
const param = new FormData() // Create the form object
param.append('file', params.file) // Append data to the form object
param.append('yyzcSumId', this.singleGenFormData.genCurrentYYZCSumId)
const config = {
  headers: { 'Content-Type': 'multipart/form-data' }
}
request({
  url: '/yyzc/batchGen',
  method: 'post',
  responseType: 'blob',
  data: param,
  ...config // spread the headers in; the original passed `config` as a key, which axios ignores
}).then(res => {
  this.$message({ message: 'Generated successfully', type: 'success' })
  filedownload(res)
  this.batchGenVisble = false
  this.$refs.batchupload.clearFiles()
})
```
While writing this I ran into an odd problem with two-way binding: replacing an array wholesale with = did not trigger an update — perhaps a watcher set up at the time was still in flight; I did not dig deeper — but replacing elements with splice kept the two-way binding working.
Very important: with two-way binding, always paginate or lazy-load the data; tens of thousands of rows will outright ruin the browser while the DOM is being created.
10. Excel export
This part references the blog https://blog.csdn.net/haibo_bear/article/details/89921608.
Parsing:
```java
InputStream inputStream = file.getInputStream();
Workbook wb = null;
if (file.getOriginalFilename().endsWith(".xls") || file.getOriginalFilename().endsWith(".XLS")) {
    wb = new HSSFWorkbook(inputStream);
} else if (file.getOriginalFilename().endsWith(".xlsx") || file.getOriginalFilename().endsWith(".XLSX")) {
    wb = new XSSFWorkbook(inputStream);
} else {
    // other extensions are not handled in this snippet
}
Sheet sheet = wb.getSheetAt(0);
// Get the last row number
int rowNum = sheet.getLastRowNum();
if (rowNum < 1) {
    throw new SystemInvalidException("Missing parsing data");
}
Row row = sheet.getRow(0);
int colNum = row.getPhysicalNumberOfCells();
Map<Integer, String> paramMapIndex = new HashMap<>();
for (int i = 0; i < colNum; i++) {
    String cellFormatValue = getCellFormatValue(row.getCell(i)).trim();
    if (paramSet.contains(cellFormatValue)) {
        paramMapIndex.put(i, cellFormatValue);
    }
}
if (paramMapIndex.size() != paramSet.size()) {
    throw new SystemInvalidException("Missing parameter");
}
Map<String, List<String>> headerMapData = new HashMap<>();
// The data starts from the second row
for (int i = 1; i <= rowNum; i++) {
    row = sheet.getRow(i);
    int j = 0;
    int countEmptyCell = 0;
    while (j < colNum) {
        String obj = getCellFormatValue(row.getCell(j));
        // Look up the column header by the column index j
        // (the original had paramMapIndex.get(i) here, which mixes up row and column)
        List<String> list = headerMapData.get(paramMapIndex.get(j));
        if (list == null) {
            list = new ArrayList<>();
        }
        if ("".equals(obj.trim())) {
            countEmptyCell++;
        } else {
            list.add(obj.trim());
            headerMapData.put(paramMapIndex.get(j), list);
        }
        j++;
    }
    // The whole row is empty: we are done
    if (countEmptyCell == colNum) {
        break;
    }
}
```
Building:
```java
public byte[] downLoadExcelWithRjnl(QueryData queryData) {
    XSSFWorkbook workbook = new XSSFWorkbook();
    XSSFSheet sheet = workbook.createSheet("select mcjnlquerylog");
    // Row counter for data rows
    int rowNum = 1;
    // NOTE: the header values and map keys were redacted ("modify") in the original post
    String[] headers = {modify}; // headers of the first row of the sheet
    XSSFRow row = sheet.createRow(0);
    // Write the header row
    for (int i = 0; i < headers.length; i++) {
        XSSFCell cell = row.createCell(i);
        XSSFRichTextString text = new XSSFRichTextString(headers[i]);
        cell.setCellValue(text);
    }
    // Write the queried data into the corresponding columns
    for (Map map : queryData.getQueryList()) {
        XSSFRow row1 = sheet.createRow(rowNum);
        row1.createCell(0).setCellValue((String) map.get(modify));
        List<Map> rjnl = (List<Map>) map.get(Dict.RJNL_RESULT_KEY);
        if (rjnl != null && rjnl.size() > 0) {
            int currentRownum = rowNum;
            for (Map map1 : rjnl) {
                XSSFRow row2 = null;
                if (currentRownum < rowNum) {
                    row2 = sheet.createRow(rowNum);
                } else {
                    row2 = row1;
                }
                row2.createCell(38).setCellValue((String) map1.get(modify));
                rowNum++;
            }
            for (int i = 0; i < 38; i++) {
                CellRangeAddress region = new CellRangeAddress(currentRownum, rowNum - 1, i, i);
                sheet.addMergedRegion(region); // merge cells
            }
        } else {
            rowNum++;
        }
    }
    if (!CommonUtils.isNullOrEmpty(queryData.getErrormsg())) {
        XSSFRow errwrrow = sheet.createRow(rowNum);
        XSSFCell cell = errwrrow.createCell(0);
        cell.setCellValue(queryData.getErrormsg());
        CellRangeAddress region = new CellRangeAddress(rowNum, rowNum, 0, 48);
        sheet.addMergedRegion(region);
    }
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    try {
        workbook.write(byteArrayOutputStream);
        return byteArrayOutputStream.toByteArray();
    } catch (IOException e) {
        throw new SystemInvalidException("File write error", e);
    } finally {
        try {
            workbook.close();
        } catch (IOException e) {
            throw new SystemInvalidException("File stream close failed", e);
        }
    }
}
```
11. Some notes on file download
There are two ways to download files: write to the response output stream, or return a ResponseEntity<byte[]>. The first gets the stream from the response to write to, and sets the response headers:
```java
public void singleGenYYZC(@RequestBody Map<String, String> requestMap, HttpServletResponse response) {
    YYZCSummary yyzcSummary = yyzcService.getYYZCSummaryFromRequestMap(requestMap);
    OutputStream outputStream = null;
    try {
        // The file name must be encoded, otherwise Chinese characters are garbled
        response.setHeader("Content-Disposition", "attachment;filename=" +
                URLEncoder.encode(yyzcSummary.getYyzcSumName() + "_" + System.currentTimeMillis() + ".zip", "UTF-8"));
        response.setContentType("application/zip;charset=utf-8");
        response.setCharacterEncoding("UTF-8");
        outputStream = response.getOutputStream();
        yyzcService.singleGenYYZC(requestMap, outputStream, yyzcSummary);
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (outputStream != null) {
            try {
                outputStream.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
```
My personal preference is the second: write the data into a ByteArrayOutputStream and return the bytes.
```java
public ResponseEntity<byte[]> batchGenYYZC(MultipartFile file, String yyzcSumId) {
    YYZCSummary yyzcSummary = yyzcService.getYYZCSummaryFromYYZCSumId(yyzcSumId);
    byte[] bytes = yyzcService.batchGenYYZC(file, yyzcSummary);
    try {
        HttpHeaders header = new HttpHeaders();
        header.add("Content-Disposition", "attachment;filename=" +
                URLEncoder.encode(yyzcSummary.getYyzcSumName() + "_" + System.currentTimeMillis() + ".zip", "UTF-8"));
        header.setContentType(MediaType.valueOf("application/zip;charset=utf-8"));
        return new ResponseEntity<byte[]>(bytes, header, HttpStatus.CREATED);
    } catch (UnsupportedEncodingException e) {
        throw new SystemInvalidException("File name encoding failed", e);
    }
}
```
Attached is a section of code that generates a zip archive:
```java
private byte[] generateZipYYZCFile(List<YYZCData> yyzcDataList, Map<String, List<String>> headerMapData, String yyzcSumSteps) {
    ZipOutputStream zipOutputStream = null;
    ByteArrayOutputStream byteArrayOutputStream = null;
    checkEverySQLParamCount(yyzcDataList, headerMapData);
    try {
        // Group the SQL entries by database name, then by type
        Map<String, Map<Integer, List<YYZCData>>> dbMapTypeMapData = yyzcDataList.stream()
                .collect(Collectors.groupingBy(YYZCData::getYyzcDtDb, Collectors.groupingBy(YYZCData::getYyzcDtType)));
        byteArrayOutputStream = new ByteArrayOutputStream();
        zipOutputStream = new ZipOutputStream(byteArrayOutputStream);
        for (Map.Entry<String, Map<Integer, List<YYZCData>>> mapTypeMapDataEntry : dbMapTypeMapData.entrySet()) {
            Map<Integer, List<YYZCData>> mapDataEntryData = mapTypeMapDataEntry.getValue();
            String dbName = mapTypeMapDataEntry.getKey();
            for (Map.Entry<Integer, List<YYZCData>> dataListEntry : mapDataEntryData.entrySet()) {
                Integer dbType = dataListEntry.getKey();
                // One .sql file per database/type combination
                ZipEntry zipEntry = new ZipEntry(dbName + intMapType.get(dbType) + ".sql");
                zipOutputStream.putNextEntry(zipEntry);
                List<YYZCData> sqlList = dataListEntry.getValue();
                sqlList.sort((x, y) -> x.getYyzcDtOrder() - y.getYyzcDtOrder());
                for (YYZCData yyzcData : sqlList) {
                    String yyzcDtParams = yyzcData.getYyzcDtParams();
                    String yyzcDtSql = yyzcData.getYyzcDtSql();
                    if (StringUtils.isEmpty(yyzcDtParams)) {
                        zipOutputStream.write(yyzcDtSql.getBytes());
                        continue;
                    }
                    String[] splitParams = yyzcDtParams.split(",");
                    int size = headerMapData.get(splitParams[0]).size();
                    for (int i = 0; i < size; i++) {
                        String tmp = yyzcDtSql;
                        for (String param : splitParams) {
                            String data = headerMapData.get(param).get(i);
                            tmp = tmp.replaceAll("#\\{" + param + "\\}", data);
                        }
                        tmp = tmp + "\r\n";
                        zipOutputStream.write(tmp.getBytes("UTF-8"));
                    }
                    zipOutputStream.write("\r\n".getBytes("UTF-8"));
                }
                zipOutputStream.closeEntry();
            }
        }
        if (!StringUtils.isEmpty(yyzcSumSteps)) {
            ZipEntry zipEntry = new ZipEntry("file.txt");
            zipOutputStream.putNextEntry(zipEntry);
            zipOutputStream.write(yyzcSumSteps.getBytes());
            zipOutputStream.closeEntry();
        }
        zipOutputStream.flush();
        zipOutputStream.finish(); // without this, bytes are missing from the output
        byteArrayOutputStream.flush();
        return byteArrayOutputStream.toByteArray();
    } catch (IOException e) {
        throw new SystemInvalidException("File generation failed", e);
    } finally {
        if (byteArrayOutputStream != null) {
            try {
                byteArrayOutputStream.close();
            } catch (IOException e) {
                throw new SystemInvalidException("File close failed", e);
            }
        }
        if (zipOutputStream != null) {
            try {
                zipOutputStream.close();
            } catch (IOException e) {
                throw new SystemInvalidException("File close failed", e);
            }
        }
    }
}
```
The key points: wrap the byte stream with

```java
new ZipOutputStream(byteArrayOutputStream);
```

then for each file in the archive create a ZipEntry, put it, write its data, and close it. And at the end, be sure to call

```java
zipOutputStream.finish();
```

Otherwise the stream is not fully written and the file the front end receives cannot be opened at all.
Finally, one bug we solved: if the file name sent to the front end is Chinese, it comes out garbled, so the backend has to encode it:

```java
URLEncoder.encode(yyzcSummary.getYyzcSumName() + "_" + System.currentTimeMillis() + ".zip", "UTF-8")
```
The file can then be downloaded as shown in section 9, Front/back-end separation.
To wrap up: since I was the only person on this, from requirements to front end to back end to testing, there are actually quite a few bugs. The code is on Gitee, but it is not public for the time being.

Only by writing a project did I realize how little I can actually do — my skills are shallow, and it was purely Baidu-driven development. There is a long way to go; I have to step up my studies.