
Memory Used Up Of Elasticsearch


Hello all,

This case mainly discusses the "Memory Used Up Of Elasticsearch" issue.

Applicable Version

6.5.x

Context and Symptom

Memory usage keeps increasing while Elasticsearch is running until it is used up, which can cause out-of-memory errors or node instability.

Cause Analysis

Scenario 1: The memory parameter configuration is incorrect.

Scenario 2: The size of the query result set is too large.

Scenario 3: A deep pagination query is performed.

Scenario 4: The amount of data aggregated is too large, and the aggregation result set is too large.

Scenario 5: A full table scan query involves too many indexes and shards.

Scenario 6: The request submitted in bulk is too large.

Scenario 7: A large number of segments exist on the node.

Troubleshooting Procedure

  1. Check the service- and instance-level parameter settings on FusionInsight Manager. Ensure that the heap size in the GC_OPTS parameter is set to 30 GB and that the value of -Xms is the same as that of -Xmx.

  2. Enable the slow query log function of Elasticsearch.
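The slow query log is enabled per index through dynamic settings. A minimal sketch of such a settings body follows; the setting names are standard Elasticsearch 6.x settings, while the threshold values are illustrative assumptions to be tuned per cluster:

```python
# Example index settings for the Elasticsearch search slow log.
# The threshold values below are assumptions, not product defaults.
SLOWLOG_SETTINGS = {
    "index.search.slowlog.threshold.query.warn": "10s",
    "index.search.slowlog.threshold.query.info": "5s",
    "index.search.slowlog.threshold.fetch.warn": "1s",
}

# Apply with: PUT /<index>/_settings, using SLOWLOG_SETTINGS as the request body.
```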

  3. Compare the slow query logs recorded well before the fault with those recorded close to the fault occurrence time. Pay attention to the following keywords:

    Table 1 Query keywords

    Scenario: The size returned in the query result is too large.
    Keyword: max_result_window (default value: 10000)
    Handling suggestion: Elasticsearch is suitable for top-N queries, not for full queries. Set the query mode to scroll or search_after.

    Scenario: Deep pagination query
    Keyword:
        {
          "from": 5000,  // from: defines where data obtaining starts.
          "size": 100    // size: defines the number of data records to be obtained.
        }
    Handling suggestion: Avoid large from offsets; use scroll or search_after instead.

    Scenario: The amount of data aggregated is too large, and the aggregation result set is too large.
    Keyword: aggregations + size
    Handling suggestion: Modify the result set size limit in the request.

    Scenario: A full table scan query involves too many indexes and shards.
    Keyword:
        GET /_all/_search
        {
          "query": {
            "match_all": {}
          }
        }
    Handling suggestion: You are advised to query data by time period. If the query period is long, query data in batches.
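
For the pagination scenarios above, a search_after request keeps memory flat because the skipped documents are never materialized. A minimal sketch of the two request bodies (the index layout and the sort field "@timestamp" are hypothetical examples, not taken from this case):

```python
# Build search_after request bodies as an alternative to deep "from"/"size" pagination.
# The sort field "@timestamp" is a hypothetical example.

def first_page(size=100):
    # First request: a sorted query with no search_after key.
    return {
        "size": size,
        # A unique tiebreaker field (_id here) keeps the sort order stable.
        "sort": [{"@timestamp": "asc"}, {"_id": "asc"}],
        "query": {"match_all": {}},
    }

def next_page(last_sort_values, size=100):
    # Follow-up requests pass the sort values of the previous page's last hit
    # instead of an ever-growing "from" offset.
    body = first_page(size)
    body["search_after"] = last_sort_values
    return body
```

Each response's last hit carries a "sort" array; feeding it to the next request replaces the from offset entirely.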


  4. Check whether a large number of bulk requests are queued in the thread pool and confirm the bulk request size with the customer. The recommended size is 5 MB to 16 MB.
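
To stay inside that 5 MB to 16 MB window, a client can split its newline-delimited bulk payload before sending it. A minimal sketch, using 5 MB from the recommendation above as the default limit:

```python
def chunk_bulk(action_lines, max_bytes=5 * 1024 * 1024):
    """Split newline-delimited _bulk action lines into payloads of at most max_bytes.

    A single line larger than max_bytes still becomes its own payload.
    """
    payloads, current, size = [], [], 0
    for line in action_lines:
        encoded = line.encode("utf-8") + b"\n"
        # Flush the current payload if adding this line would exceed the limit.
        if current and size + len(encoded) > max_bytes:
            payloads.append(b"".join(current))
            current, size = [], 0
        current.append(encoded)
        size += len(encoded)
    if current:
        payloads.append(b"".join(current))
    return payloads
```

Note that real bulk bodies pair an action metadata line with a source line; a production splitter must keep each pair in the same payload.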

  5. Check the number of segments and the memory size occupied by the segments. For indexes that no longer receive writes, you are advised to force merge the segments or age the indexes out.
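
Segment counts and memory can be read from GET _cat/segments?h=index,size.memory&bytes=b. A small sketch that totals segment memory per index from that output (the sample lines in the usage below are made up for illustration):

```python
def segment_memory_by_index(cat_lines):
    """Sum segment memory per index from _cat/segments output.

    Expects lines of "index size.memory" as returned by
    GET _cat/segments?h=index,size.memory&bytes=b (values in bytes, no header row).
    Indexes with a large total and many segments are force-merge candidates.
    """
    totals = {}
    for line in cat_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip blank or malformed lines
        index, memory_bytes = parts
        totals[index] = totals.get(index, 0) + int(memory_bytes)
    return totals
```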

  6. Run the jmap -histo and jmap -dump commands to analyze the heap.


    jmap -histo <pid>
    jmap -dump:format=b,file=/tmp/esdump.hprof <pid>

Any solutions will be appreciated!
