Hello all,
this case covers "Memory Used Up of Elasticsearch", that is, how to troubleshoot memory exhaustion on Elasticsearch nodes.
Applicable Version
6.5.x
Context and Symptom
Memory usage keeps growing while Elasticsearch is running until the memory is used up.
Cause Analysis
Scenario 1: The memory parameter configuration is incorrect.
Scenario 2: The size of the query result set is too large.
Scenario 3: A deep pagination query is performed.
Scenario 4: The amount of data aggregated is too large, and the aggregation result set is too large.
Scenario 5: A full table scan query involves too many indexes and shards.
Scenario 6: The bulk request submitted is too large.
Scenario 7: A large number of segments exist on the node.
Troubleshooting Procedure
Check the service-level and instance-level parameter settings on FusionInsight Manager. Ensure that the heap size configured in GC_OPTS is appropriate (for example, 30 GB) and that the value of -Xms is the same as that of -Xmx.
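On FusionInsight Manager, GC_OPTS carries the JVM options for the Elasticsearch instance. A minimal sketch of the heap-related part, assuming the 30 GB value mentioned above (the surrounding GC flags are omitted):

```
-Xms30g
-Xmx30g
```

Setting -Xms equal to -Xmx avoids heap resizing pauses at runtime.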
Enable the slow query log function of Elasticsearch.
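Slow query logging in Elasticsearch is controlled per index through the search slow log thresholds. A sketch of enabling it, assuming a placeholder index name my_index and example thresholds (tune them to your workload):

```
PUT /my_index/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.fetch.warn": "1s"
}
```

Queries slower than a threshold are then written to the index search slow log for the corresponding level.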
Check the slow query logs that are recorded a long time ago and those recorded close to the fault occurrence time. Pay attention to the following keywords:
Table 1 Query keywords

Scenario: The size returned in the query result is too large.
Keyword: max_result_window (default value: 10000)
Handling suggestion: Elasticsearch is suitable for top-N queries, but not for pulling the full result set. Set the query mode to scroll or search_after instead.

Scenario: Deep pagination query
Keyword:
{
  "from": 5000,  // from: the offset at which data obtaining starts
  "size": 100    // size: the number of data records to be obtained
}
Handling suggestion: Avoid large from values; use search_after or scroll to page through deep result sets.

Scenario: The amount of aggregated data is too large, and the aggregation result set is too large.
Keyword: aggregations + size
Handling suggestion: Reduce the result set size limit in the request.

Scenario: A full table scan query involves too many indexes and shards.
Keyword:
GET /_all/_search
{
  "query": {
    "match_all": {}
  }
}
Handling suggestion: You are advised to query data by time period. If the query period is long, query data in batches.
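To make the deep-pagination row above concrete, here is a minimal Python sketch of the two request bodies. The sort field name "timestamp" and the sample sort values are illustrative assumptions, not from the original case:

```python
# Sketch: comparing deep from/size paging with search_after.

def deep_page_body(page, page_size):
    """from/size body: every shard must collect from + size hits,
    so memory cost grows with page depth (the deep-pagination scenario)."""
    return {"from": page * page_size, "size": page_size}

def search_after_body(page_size, last_sort_values=None):
    """search_after body: resumes from the sort values of the previous
    page's last hit, so per-request memory cost stays constant."""
    body = {
        "size": page_size,
        # _id as a tie-breaker keeps the sort order total and stable
        "sort": [{"timestamp": "asc"}, {"_id": "asc"}],
    }
    if last_sort_values is not None:
        body["search_after"] = last_sort_values
    return body

first = search_after_body(100)
# For the next page, pass the "sort" values of the last hit in the response:
next_page = search_after_body(100, last_sort_values=[1700000000000, "doc-4711"])
```

The first request omits search_after; each later request carries the sort values of the previous page's last hit instead of a growing from offset.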
Check whether a large number of bulk requests are queued in the thread pool, and confirm the bulk request size with the customer. The recommended size of a single bulk request is 5 MB to 16 MB.
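One way to stay inside the 5 MB to 16 MB range is to pre-chunk the bulk payload on the client side. A minimal Python sketch (the helper name and byte limit are illustrative assumptions):

```python
def chunk_bulk_lines(pairs, max_bytes=5 * 1024 * 1024):
    """Split NDJSON (action line, source line) pairs into batches whose
    serialized size stays below max_bytes, so each _bulk request stays
    inside the recommended size range."""
    batches, batch, size = [], [], 0
    for pair in pairs:
        # +1 per line for the trailing newline required by the bulk format
        pair_bytes = sum(len(line.encode("utf-8")) + 1 for line in pair)
        if batch and size + pair_bytes > max_bytes:
            batches.append(batch)
            batch, size = [], 0
        batch.append(pair)
        size += pair_bytes
    if batch:
        batches.append(batch)
    return batches
```

Each returned batch can then be joined with newlines and sent as one _bulk request.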
Check the number of segments and the memory occupied by them. For indexes that no longer receive writes, you are advised to merge them or age them out.
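The per-node segment count and memory can be inspected with the cat segments API, and read-only indexes can be compacted with a force merge. A sketch, assuming a placeholder index name old-index:

```
GET /_cat/segments?v
POST /old-index/_forcemerge?max_num_segments=1
```

Force-merging down to one segment is only advisable for indexes that are no longer written to, since new writes create fresh segments again.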
Run the jmap -histo and jmap -dump commands to analyze the heap of the problematic Elasticsearch process:
jmap -histo <pid>
jmap -dump:format=b,file=/tmp/esdump.hprof <pid>
Any solutions will be appreciated!