Abstract
I was bored, so I wrote this article. Everyone (minna-san) can use it directly; there should be almost no bugs, and if there are, I said nothing (doge).
Q&A: why not use the Elasticsearch operations encapsulated by Spring Boot?
In my opinion, Spring Boot over-encapsulates the operation classes: they are fine for ordinary, simple operations, but hard to use once more complex operations are involved. In particular, the APIs shipped by different Spring Boot versions change frequently, which makes them even harder to work with, whereas the official es API updates do not change the operation classes nearly as often. Personally, I find the Spring Boot operations neither as flexible nor as powerful as the official es API, and the errors reported through Spring Boot are hard to figure out. They couldn't meet the requirements I ran into at work, so the official API is used here.
Elasticsearch version: 7.4
Installation and operation documents: https://blog.csdn.net/UnicornRe/article/details/121747039?spm=1001.2014.3001.5501
Dependencies
It is best to keep the dependency version consistent with your es version. If the dependencies below report an error, add the version property as a sibling of the Maven <parent> tag:
<properties>
    <java.version>1.8</java.version>
    <!-- <spring-cloud.version>2020.0.2</spring-cloud.version> -->
    <!-- resolve version conflicts -->
    <elasticsearch.version>7.4.0</elasticsearch.version>
</properties>
<!-- elasticsearch -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.4.0</version>
</dependency>
yml configuration
You can modify the configuration and code to register multiple es machines, with the addresses separated by commas.
elasticsearch:
  schema: http
  address: 192.168.52.43:9200
  connectTimeout: 5000
  socketTimeout: 5000
  connectionRequestTimeout: 5000
  maxConnectNum: 100
  maxConnectPerRoute: 100
Connection configuration
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

@Configuration
public class EsHighLevalConfigure {
    // protocol
    @Value("${elasticsearch.schema:http}")
    private String schema = "http";
    // cluster addresses; separate multiple addresses with ","
    @Value("${elasticsearch.address}")
    private String address;
    // connection timeout
    @Value("${elasticsearch.connectTimeout:5000}")
    private int connectTimeout;
    // socket timeout
    @Value("${elasticsearch.socketTimeout:10000}")
    private int socketTimeout;
    // timeout for obtaining a connection from the pool
    @Value("${elasticsearch.connectionRequestTimeout:5000}")
    private int connectionRequestTimeout;
    // maximum total connections
    @Value("${elasticsearch.maxConnectNum:100}")
    private int maxConnectNum;
    // maximum connections per route
    @Value("${elasticsearch.maxConnectPerRoute:100}")
    private int maxConnectPerRoute;

    // note: not static — a static @Bean method could not read the instance fields injected above
    @Bean
    public RestHighLevelClient restHighLevelClient() {
        List<HttpHost> hostLists = new ArrayList<>();
        String[] hostList = address.split(",");
        for (String addr : hostList) {
            String host = addr.split(":")[0];
            String port = addr.split(":")[1];
            hostLists.add(new HttpHost(host, Integer.parseInt(port), schema));
        }
        HttpHost[] httpHost = hostLists.toArray(new HttpHost[]{});
        // build the connection object
        RestClientBuilder builder = RestClient.builder(httpHost);
        // request timeout configuration
        builder.setRequestConfigCallback(requestConfigBuilder -> {
            requestConfigBuilder.setConnectTimeout(connectTimeout);
            requestConfigBuilder.setSocketTimeout(socketTimeout);
            requestConfigBuilder.setConnectionRequestTimeout(connectionRequestTimeout);
            return requestConfigBuilder;
        });
        // connection pool configuration
        builder.setHttpClientConfigCallback(httpClientBuilder -> {
            httpClientBuilder.setMaxConnTotal(maxConnectNum);
            httpClientBuilder.setMaxConnPerRoute(maxConnectPerRoute);
            httpClientBuilder.setKeepAliveStrategy((response, context) -> Duration.ofMinutes(5).toMillis());
            return httpClientBuilder;
        });
        return new RestHighLevelClient(builder);
    }
}
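Before moving on, a minimal usage sketch of the bean above (EsDemoService is a made-up name; ping() is simply the client's connectivity check):

import java.io.IOException;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class EsDemoService { // hypothetical service name
    @Autowired
    private RestHighLevelClient restHighLevelClient;

    // returns true when the cluster responds
    public boolean esIsUp() throws IOException {
        return restHighLevelClient.ping(RequestOptions.DEFAULT);
    }
}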
Index structure
Your index structure will certainly differ from mine, but the code structure won't need major surgery.
Briefly: one intellectual-property record contains n document annexes and n applicants (applicant/inventor),
so the "type": "nested" nested type is used. If you don't know how it differs from "type": "object", look it up yourself; I won't go into it here.
If you want to read about optimization, installation, data migration, and cold backup, take a look at my article (it covers a lot, and some parts are unfinished): https://blog.csdn.net/UnicornRe/article/details/121747039?spm=1001.2014.3001.5501
PUT /intellectual
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}

PUT /intellectual/_mapping
{
  "properties": {
    "id":         { "type": "long" },
    "name":       { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
    "type":       { "type": "keyword" },
    "keycode":    { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
    "officeId":   { "type": "keyword" },
    "officeName": { "type": "keyword" },
    "titular":    { "type": "keyword" },
    "applyTime":  { "type": "long" },
    "endTime":    { "type": "long" },
    "status":     { "type": "keyword" },
    "agentName":  { "type": "text", "analyzer": "ik_smart", "search_analyzer": "ik_smart" },
    "annex": {
      "type": "nested",
      "properties": {
        "id":         { "type": "long" },
        "name":       { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
        "content":    { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_max_word" },
        "createTime": { "type": "long" }
      }
    },
    "applicant": {
      "type": "nested",
      "properties": {
        "id":          { "type": "long" },
        "applicantId": { "type": "long" },
        "isOffice":    { "type": "integer" },
        "userName":    { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
        "outUsername": { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" }
      }
    }
  }
}
CRUD for ordinary (non-nested) fields
Here we ignore "type": "nested" objects and operate only on ordinary fields.
First define an entity class, IntellectualEntity, whose fields match the mapping above.
All operations below use an injected RestHighLevelClient restHighLevelClient.
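The entity classes themselves are not shown in this article, so here is a minimal sketch consistent with the mapping above. The Lombok @Data annotation and the ApplicantEntity name are my assumptions (AnnexEntity appears later in the nested examples):

import java.util.List;
import lombok.Data;

@Data // assumption: Lombok generates getters/setters
public class IntellectualEntity {
    private Long id;
    private String name;
    private String type;
    private String keycode;
    private String officeId;
    private String officeName;
    private String titular;
    private Long applyTime; // stored in es as a long timestamp
    private Long endTime;
    private String status;
    private String agentName;
    private List<AnnexEntity> annex;         // nested
    private List<ApplicantEntity> applicant; // nested (ApplicantEntity is a hypothetical name)
}

@Data
class AnnexEntity {
    private Long id;
    private String name;
    private String content;
    private Long createTime;
}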
Insert
public void insertIntel(IntellectualEntity intellectualEntity) throws IOException {
    // "intellectual" is the index name
    IndexRequest indexRequest = new IndexRequest("intellectual")
            .source(JSON.toJSONString(intellectualEntity), XContentType.JSON)
            .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
            .id(intellectualEntity.getId() + ""); // manually specify the es document id
    IndexResponse out = restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
    log.info("Status: {}", out.status());
}
Update (update by id)
Only the non-null fields of the entity are updated (fastjson skips null fields by default when serializing), just like the default update provided by MyBatis-Plus.
Because the es document id is unique, this method updates at most one document.
public void updateIntel(IntellectualEntity entity) throws IOException {
    // update the document whose doc id equals the IntellectualEntity id
    UpdateRequest updateRequest = new UpdateRequest("intellectual", entity.getId() + "");
    byte[] json = JSON.toJSONBytes(entity);
    updateRequest.doc(json, XContentType.JSON);
    UpdateResponse response = restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
    log.info("Status: {}", response.status());
}
Update (advanced: update by search criteria, using a Painless script)
Painless scripts suit many complex business cases, such as updating fields from the values of a map.
private void updateByQuery(IntellectualEntity entity) throws IOException {
    UpdateByQueryRequest updateByQueryRequest = new UpdateByQueryRequest();
    updateByQueryRequest.indices("intellectual");
    // the search condition is id (the doc id specified on insert equals the entity id, which keeps the result unique)
    // if the condition matches many documents, use with caution
    updateByQueryRequest.setQuery(new TermQueryBuilder("id", entity.getId()));
    // map holds the script parameter values
    Map<String, Object> map = new HashMap<>();
    map.put("intelName", entity.getName());
    map.put("intelStatus", entity.getStatus());
    map.put("intelApplyTime", entity.getApplyTime());
    map.put("intelKeyCode", entity.getKeycode());
    map.put("intelEndTime", entity.getEndTime());
    map.put("intelType", entity.getType());
    map.put("intelTitular", entity.getTitular());
    // specify which fields to update: ctx._source.xxx is the es field, assigned from the map values
    updateByQueryRequest.setScript(new Script(ScriptType.INLINE, "painless",
            "ctx._source.intelName=params.intelName;" +
            "ctx._source.intelStatus=params.intelStatus;" +
            "ctx._source.intelApplyTime=params.intelApplyTime;" +
            "ctx._source.intelKeyCode=params.intelKeyCode;" +
            "ctx._source.intelEndTime=params.intelEndTime;" + // set in the map, so assign it here too
            "ctx._source.intelType=params.intelType;" +
            "ctx._source.intelTitular=params.intelTitular;",
            map));
    BulkByScrollResponse bulkByScrollResponse =
            restHighLevelClient.updateByQuery(updateByQueryRequest, RequestOptions.DEFAULT);
    log.info("Update status: {}", bulkByScrollResponse.getStatus());
}
Delete
public void deleteIntel(IntellectualEntity entity) throws IOException {
    DeleteRequest deleteRequest = new DeleteRequest("intellectual", entity.getId() + "");
    DeleteResponse deleteResponse = restHighLevelClient.delete(deleteRequest, RequestOptions.DEFAULT);
    log.info("Status: {}", deleteResponse.status());
}
Delete (delete according to search criteria)
Similar to the update-by-search-criteria operation: just replace DeleteRequest with DeleteByQueryRequest. You're witty, so you already get the idea; a sketch follows anyway.
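A minimal sketch, assuming the same id-based condition as the update-by-query above (same class and imports as the other methods):

public void deleteIntelByQuery(IntellectualEntity entity) throws IOException {
    DeleteByQueryRequest deleteByQueryRequest = new DeleteByQueryRequest("intellectual");
    // same caveat as update-by-query: every document matching the condition is deleted
    deleteByQueryRequest.setQuery(new TermQueryBuilder("id", entity.getId()));
    BulkByScrollResponse response =
            restHighLevelClient.deleteByQuery(deleteByQueryRequest, RequestOptions.DEFAULT);
    log.info("Deleted: {}", response.getDeleted());
}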
Search with highlighting (ordinary highlighting; space-separated multi-keyword search)
This code does not yet involve highlighting nested fields.
When combining conditions: should = OR, must = AND.
Steps: build the highlighter -> run the search -> replace the non-highlighted data with the highlighted data -> return the results.
First, write the highlight builder:
private static HighlightBuilder highlightBuilder;
static {
    highlightBuilder = new HighlightBuilder();
    highlightBuilder.numOfFragments(0);                 // 0 = no fragmenting, return the whole field
    highlightBuilder.preTags("<font color='#e75213'>"); // custom highlight tag
    highlightBuilder.postTags("</font>");
    highlightBuilder.highlighterType("unified");        // highlighter type
    highlightBuilder
            .field("name")    // fields to highlight
            .field("keycode");
    highlightBuilder.requireFieldMatch(false);          // allow highlighting multiple fields
}
Search steps:
public List<Map<String, Object>> queryByContent(String content, Integer pageCurrent,
                                                Date startTimeApply, Date endTimeApply,
                                                Date startTimeEnd, Date endTimeEnd) throws IOException {
    // split on whitespace: multiple search terms are supported; the terms are combined with AND
    String[] manyStr = content.split("\\s+");
    // the List<Map> returned as the result
    List<Map<String, Object>> list = new LinkedList<>();
    // first, build the condition builder
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    if (manyStr.length > 1) {
        for (int i = 0; i < manyStr.length; i++) {
            BoolQueryBuilder innerBoolQueryBuilder = QueryBuilders.boolQuery();
            // nestedQuery: nested search conditions
            innerBoolQueryBuilder.should(QueryBuilders.nestedQuery("annex",
                    QueryBuilders.matchQuery("annex.content", manyStr[i]), ScoreMode.Max).boost(2));
            innerBoolQueryBuilder.should(QueryBuilders.nestedQuery("annex",
                    QueryBuilders.matchQuery("annex.simpleContent", manyStr[i]), ScoreMode.Max).boost(2));
            innerBoolQueryBuilder.should(QueryBuilders.nestedQuery("applicant",
                    QueryBuilders.matchQuery("applicant.userName", manyStr[i]).prefixLength(2).maxExpansions(4).boost(5), ScoreMode.Max));
            innerBoolQueryBuilder.should(QueryBuilders.nestedQuery("applicant",
                    QueryBuilders.matchQuery("applicant.outUsername", manyStr[i]).prefixLength(2).maxExpansions(4).boost(5), ScoreMode.Max));
            innerBoolQueryBuilder.should(QueryBuilders.matchQuery("name", manyStr[i]).boost(8));
            innerBoolQueryBuilder.should(QueryBuilders.termsQuery("officeName", manyStr[i]).boost(100));
            innerBoolQueryBuilder.should(QueryBuilders.fuzzyQuery("keycode", manyStr[i]).boost(5));
            innerBoolQueryBuilder.should(QueryBuilders.matchQuery("agentName", manyStr[i]).boost(5));
            innerBoolQueryBuilder.should(QueryBuilders.termsQuery("status", manyStr[i]).boost(30));
            // AND relation between the terms
            boolQueryBuilder.must(innerBoolQueryBuilder);
        }
    } else {
        // no spaces: a single term
        boolQueryBuilder.should(QueryBuilders.nestedQuery("annex",
                QueryBuilders.matchQuery("annex.content", content), ScoreMode.Max).boost(2));
        boolQueryBuilder.should(QueryBuilders.nestedQuery("annex",
                QueryBuilders.matchQuery("annex.simpleContent", content), ScoreMode.Max).boost(2));
        // nested highlighting (.innerHit(new InnerHitBuilder().setHighlightBuilder(highlightBuilder))) is not needed yet
        boolQueryBuilder.should(QueryBuilders.nestedQuery("applicant",
                QueryBuilders.matchQuery("applicant.userName", content).prefixLength(2).maxExpansions(4).boost(5), ScoreMode.Max));
        boolQueryBuilder.should(QueryBuilders.nestedQuery("applicant",
                QueryBuilders.matchQuery("applicant.outUsername", content).prefixLength(2).maxExpansions(4).boost(5), ScoreMode.Max));
        boolQueryBuilder.should(QueryBuilders.matchQuery("name", content).boost(8));
        boolQueryBuilder.should(QueryBuilders.termsQuery("officeName", content).boost(100));
        boolQueryBuilder.should(QueryBuilders.fuzzyQuery("keycode", content).boost(5));
        boolQueryBuilder.should(QueryBuilders.matchQuery("agentName", content).boost(5));
        boolQueryBuilder.should(QueryBuilders.termsQuery("status", content).boost(30));
    }
    if (startTimeApply != null) {
        // filters do not participate in scoring, so the time conditions cannot inflate the score and skew sorting
        boolQueryBuilder.filter(QueryBuilders.rangeQuery("applyTime").gte(startTimeApply.getTime()));
    }
    if (endTimeApply != null) {
        boolQueryBuilder.filter(QueryBuilders.rangeQuery("applyTime").lte(endTimeApply.getTime()));
    }
    if (startTimeEnd != null) {
        boolQueryBuilder.filter(QueryBuilders.rangeQuery("endTime").gte(startTimeEnd.getTime()));
    }
    if (endTimeEnd != null) {
        boolQueryBuilder.filter(QueryBuilders.rangeQuery("endTime").lte(endTimeEnd.getTime()));
    }
    // new request
    SearchRequest searchRequest = new SearchRequest("intellectual");
    // new search source builder
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    // attach the highlighter to the search source builder
    searchSourceBuilder.highlighter(highlightBuilder);
    // minScore: matches scoring below this value are excluded from the results; the score is affected by
    // the boost values. The larger the boost, the higher the computed score; if the results are
    // unsatisfying, tune the boosts, or (expert-level) define a custom scoring formula.
    // Pagination: from/size paging can crash the search when the page number is very large (the summary
    // data of the results requires a lot of jvm memory; I'm too lazy to go into the principle, look it
    // up if interested). The solution is deep paging — see the sketch after this method. On a single
    // machine with a single shard, from/size will not crash the search.
    searchSourceBuilder
            .minScore(9)                  // minimum score
            .query(boolQueryBuilder)      // load the search conditions
            .from((pageCurrent - 1) * 10) // start offset, counted from 0
            .size(10);                    // records per page
    // load the search source builder
    searchRequest.source(searchSourceBuilder);
    // run the search
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    // inspect the scores and adjust the boost values until the results are "satisfying"
    log.info("Total hits: {}", search.getHits().getTotalHits().value);
    log.info("Max score of matching documents: {}", search.getHits().getMaxScore());
    // iterate over the hits
    for (SearchHit documentFields : search.getHits().getHits()) {
        // sourceAsMap is the result without highlighting; if highlighting is not needed, return it directly
        Map<String, Object> sourceAsMap = documentFields.getSourceAsMap();
        // highlightFieldsMap holds the highlighted fragments
        Map<String, HighlightField> highlightFieldsMap = documentFields.getHighlightFields();
        // use changeHighLightMap to replace the plain field values with the highlighted ones
        sourceAsMap = changeHighLightMap(sourceAsMap, highlightFieldsMap);
        // es stores times as long timestamps, so convert back to Date
        sourceAsMap.put("applyTime", new Date(Long.parseLong(sourceAsMap.get("applyTime") + "")));
        if (sourceAsMap.get("endTime") != null) {
            sourceAsMap.put("endTime", new Date(Long.parseLong(sourceAsMap.get("endTime") + "")));
        }
        // print the score
        log.info("Score: {}", documentFields.getScore());
        // save to the list
        list.add(sourceAsMap);
    }
    return list;
}
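On the deep-paging remark in the comments above: a minimal search_after sketch, assuming we sort by the unique id field. lastSortValues comes from hit.getSortValues() of the previous page's last hit; pass null for the first page:

public SearchResponse nextPage(BoolQueryBuilder boolQueryBuilder, Object[] lastSortValues) throws IOException {
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
            .query(boolQueryBuilder)
            .size(10)
            // search_after needs a deterministic sort; a unique field such as id works
            .sort(SortBuilders.fieldSort("id").order(SortOrder.ASC));
    if (lastSortValues != null) {
        // sort values of the last hit of the previous page
        sourceBuilder.searchAfter(lastSortValues);
    }
    SearchRequest searchRequest = new SearchRequest("intellectual").source(sourceBuilder);
    return restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
}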
The changeHighLightMap method (nested highlighted fields are left out here for now):
private Map<String, Object> changeHighLightMap(Map<String, Object> map, Map<String, HighlightField> highlightFields) {
    // A document has several attributes, and the highlighter can be configured with several fields.
    // Only the fields a search actually hit are stored in highlightFields; missed fields are absent.
    // So a non-null entry means that attribute of this document was hit.
    HighlightField highlightName = highlightFields.get("name");
    HighlightField highlightKC = highlightFields.get("keycode");
    if (highlightName != null) {
        // fragments() is an array; for nested data it can be long, but the fields highlighted here
        // are not nested, so either highlightName == null or fragments().length == 1
        // replace the plain value in the result set with the highlighted one
        map.put("name", highlightName.fragments()[0].string());
    }
    if (highlightKC != null) {
        map.put("keycode", highlightKC.fragments()[0].string());
    }
    return map;
}
Advanced CRUD on nested types with Painless
Nested insert
As above, one intellectual-property record contains n document annexes and n applicants (applicant/inventor). Both attributes are of nested type and are lists.
First, write a conversion helper:
private static com.fasterxml.jackson.databind.ObjectMapper mapper =
        new com.fasterxml.jackson.databind.ObjectMapper();

// convert an entity to a Map for use as a script parameter
private static Map<String, Object> convertValueToMap(Object data) {
    return mapper.convertValue(data, new com.fasterxml.jackson.core.type.TypeReference<Map<String, Object>>() {});
}
Scenario: the intellectual-property record already exists, and the user uploads a pdf document. After reading the pdf's content, we need to store the pdf info in the record's annex list.
public void addAnnex(IntellectualEntity entity, AnnexEntity annexEntity) throws IOException {
    UpdateRequest updateRequest = new UpdateRequest("intellectual", entity.getId() + "");
    Map<String, Object> param = new HashMap<>();
    // convert the parameter
    param.put("data", convertValueToMap(annexEntity));
    // ctx._source is fixed syntax
    StringBuffer sc = new StringBuffer("ctx._source.annex.add(params.data);");
    Script script = new Script(INLINE, Script.DEFAULT_SCRIPT_LANG, sc.toString(), param);
    updateRequest.script(script);
    // to run the script even on first write: if the document does not exist, the upsert content is inserted
    //updateRequest.scriptedUpsert(true);
    // required together with scriptedUpsert, otherwise a missing document causes an error
    //updateRequest.upsert(param);
    UpdateResponse response = restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
}
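If the document might not exist yet, the commented-out lines can be enabled. A sketch of that variant (my assumption: the record is created with an empty annex list on first write, so the script's add() still works):

// upsert variant: if the document does not exist, the upsert content is inserted and the script runs on it
Map<String, Object> upsertDoc = new HashMap<>();
upsertDoc.put("annex", new ArrayList<>()); // hypothetical initial content: an empty nested list
updateRequest.scriptedUpsert(true);
updateRequest.upsert(upsertDoc);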
Nested delete
Now the user needs to delete an annex document belonging to an intellectual-property record.
public void deleteAnnex(String intelId, Integer annexId) throws IOException {
    UpdateRequest updateRequest = new UpdateRequest("intellectual", intelId);
    Map<String, Object> param = new HashMap<>();
    param.put("id", annexId); // pass a number, not a string, or the == comparison below will not match
    StringBuffer sc = new StringBuffer("ctx._source.annex.removeIf(item -> item.id == params.id)");
    Script script = new Script(INLINE, Script.DEFAULT_SCRIPT_LANG, sc.toString(), param);
    updateRequest.script(script);
    UpdateResponse response = restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
}
Nested update
The user modifies an annex document belonging to an intellectual-property record.
public void updateAnnex(IntellectualEntity entity, AnnexEntity annexEntity) throws IOException {
    UpdateRequest updateRequest = new UpdateRequest("intellectual", entity.getId() + "");
    Map<String, Object> param = new LinkedHashMap<>();
    // convert the parameter
    param.put("data", convertValueToMap(annexEntity));
    // iterate over the nested list and replace the element whose id matches
    // (in painless, the nested list items are LinkedHashMaps)
    StringBuffer sc = new StringBuffer(
            "int i = 0;" +
            "for (LinkedHashMap item : ctx._source.annex) {" +
            "  if (item.id == params.data.id) { ctx._source.annex[i] = params.data; }" +
            "  i++;" +
            "}");
    Script script = new Script(INLINE, Script.DEFAULT_SCRIPT_LANG, sc.toString(), param);
    updateRequest.script(script);
    UpdateResponse response = restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
}
Nested search-result highlighting
Similarly, write a highlighter first.
It is better not to mix these nested attributes into the ordinary highlighter configured earlier.
private static HighlightBuilder highlightBuilder2;
static {
    highlightBuilder2 = new HighlightBuilder();
    highlightBuilder2.numOfFragments(0);
    highlightBuilder2.preTags("<font color='#e75213'>");
    highlightBuilder2.postTags("</font>");
    highlightBuilder2.highlighterType("unified");
    highlightBuilder2
            .field("annex.content")
            .field("annex.simpleContent")
            .field("applicant.userName")
            .field("applicant.outUsername");
    highlightBuilder2.requireFieldMatch(false);
}
In the search code, append this to each nestedQuery:
.innerHit(new InnerHitBuilder().setHighlightBuilder(highlightBuilder2))
boolQueryBuilder.should(QueryBuilders.nestedQuery("annex",
        QueryBuilders.matchQuery("annex.content", "text"), ScoreMode.Max)
        .boost(2)
        .innerHit(new InnerHitBuilder().setHighlightBuilder(highlightBuilder2)));
The result loop is still the ordinary highlight code, with the following changes:
for (SearchHit documentFields : search.getHits().getHits()) {
    // sourceAsMap is the result without highlighting
    Map<String, Object> sourceAsMap = documentFields.getSourceAsMap();
    // get the nested hits
    Map<String, SearchHits> innerHits = documentFields.getInnerHits();
    // replace the nested highlights (sourceAsMap is modified through the reference)
    changeNestedHighLightMap(innerHits, sourceAsMap);
    // highlightFieldsMap holds the ordinary highlights
    Map<String, HighlightField> highlightFieldsMap = documentFields.getHighlightFields();
    // replace the ordinary highlights with the changeHighLightMap method
    sourceAsMap = changeHighLightMap(sourceAsMap, highlightFieldsMap);
    // es stores times as long timestamps, so convert back to Date
    if (sourceAsMap.get("applyTime") != null) {
        sourceAsMap.put("applyTime", new Date(Long.parseLong(sourceAsMap.get("applyTime") + "")));
    }
    if (sourceAsMap.get("endTime") != null) {
        sourceAsMap.put("endTime", new Date(Long.parseLong(sourceAsMap.get("endTime") + "")));
    }
    // save to the list
    list.add(sourceAsMap);
}
changeNestedHighLightMap method
private static void changeNestedHighLightMap(Map<String, SearchHits> innerHits, Map<String, Object> sourceAsMap) {
    SearchHits annexHits = innerHits.get("annex");
    if (annexHits != null) { // null-check before getHits(), otherwise a miss would throw a NullPointerException
        for (SearchHit searchHit : annexHits.getHits()) {
            int offset = searchHit.getNestedIdentity().getOffset(); // subscript of the hit in the nested array
            Map<String, HighlightField> highlightFields = searchHit.getHighlightFields();
            List<Map<String, Object>> lm = (List<Map<String, Object>>) sourceAsMap.get("annex");
            Map<String, Object> map = lm.get(offset);
            HighlightField content = highlightFields.get("annex.content");
            if (content != null) {
                map.put("content", content.fragments()[0].string());
            }
            lm.set(offset, map);
        }
    }
}
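The method above only replaces annex hits; the applicant inner hits that highlightBuilder2 also covers can be handled the same way. A sketch under that assumption, to be placed inside the same method:

// applicant hits, handled exactly like the annex hits above
SearchHits applicantHits = innerHits.get("applicant");
if (applicantHits != null) {
    for (SearchHit searchHit : applicantHits.getHits()) {
        int offset = searchHit.getNestedIdentity().getOffset();
        Map<String, HighlightField> highlightFields = searchHit.getHighlightFields();
        List<Map<String, Object>> applicants = (List<Map<String, Object>>) sourceAsMap.get("applicant");
        HighlightField userName = highlightFields.get("applicant.userName");
        if (userName != null) {
            applicants.get(offset).put("userName", userName.fragments()[0].string());
        }
    }
}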
Interpreting the result JSON for nested highlighting
{ "took": 7, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 }, "hits": { "total": { "value": 1, "relation": "eq" }, "max_score": 1.1507283, "hits": [ { "_index": "intellectual", "_type": "_doc", "_id": "1", "_score": 1.1507283, "_source": { "keycode": "keycode-1", "name": "Painless Update 2", "id": 1, "applyTime": 1645442231033, "annex": [ { "size": null, "createTime": null, "filePath": null, "apId": null, "name": null, "simpleContent": null, "annexs": null, "id": 1, "type": null, "isLogimg": null, "content": "text=====dddd" } ] }, ==========The above is the original data ==========Here is nested Data and carry highlighted data "inner_hits": { "annex": { "hits": { "total": { "value": 1, "relation": "eq" }, "max_score": 0.5753642, "hits": [ { "_index": "intellectual", "_type": "_doc", "_id": "1", "_nested": { "field": "annex", "offset": 0 ==========nested Array hit subscript }, "_score": 0.5753642, "_source": { "size": null, "createTime": null, "filePath": null, "apId": null, "name": null, "simpleContent": null, "annexs": null, "id": 1, "type": null, "isLogimg": null, "content": "text=====dddd" }, "highlight": { "annex.content": [ ==========Here, you need to replace the original data subscript=offset Original data of "<font color='#e75213'>writing</font><font color='#E75213 '> this < / font > = = = = dddd“ ] } } ] } } } } ] } }