
Spark tasks are slow in coarse-grained computing: when Spark invokes the getSplits method, a large number of small files must be traversed.


Symptom

The Spark task takes a long time to calculate daily SDRs. After the cluster capacity was expanded and the number of components increased, tuning the mapreduce.input.fileinputformat.list-status.num-threads parameter provided in the product did not improve performance on the FI side, so the root cause had to be analyzed.
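For reference, a minimal sketch of how this tuning parameter is typically applied when reading input through a CombineFileInputFormat-based format (the job name, input path, and thread count are assumptions for illustration, not values from this case):

```scala
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat
import org.apache.spark.sql.SparkSession

object DailySdrJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DailySdrJob") // hypothetical job name
      .getOrCreate()
    val sc = spark.sparkContext

    // Parallelizes the file listing done by FileInputFormat.listStatus().
    // This is the parameter that was tuned in this case; it did not help,
    // because the bottleneck is the split computation that follows listing.
    sc.hadoopConfiguration.setInt(
      "mapreduce.input.fileinputformat.list-status.num-threads", 20)

    // Hypothetical input directory containing a large number of small files.
    val rdd = sc.newAPIHadoopFile(
      "/data/sdr/daily/",
      classOf[CombineTextInputFormat],
      classOf[LongWritable],
      classOf[Text])

    println(s"records: ${rdd.count()}")
    spark.stop()
  }
}
```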

Solution

The getMoreSplits method has no tuning parameters, and the time spent traversing files depends on the number of files. Therefore, you are advised to reduce the number of files on the service side to speed up the job.
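As one possible way to reduce the file count on the service side, a periodic compaction job can rewrite the many small files into fewer, larger ones before the SDR calculation runs. This is a hedged sketch only; the directory names and the target file count are assumptions, not values from this case:

```scala
import org.apache.spark.sql.SparkSession

object SdrSmallFileCompaction {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SdrSmallFileCompaction") // hypothetical compaction job
      .getOrCreate()

    // Hypothetical source directory containing many small files.
    val input = spark.read.textFile("/data/sdr/daily/")

    // Rewrite the data into a small, fixed number of larger files so that
    // later getSplits/getMoreSplits calls have far fewer files to traverse.
    // The target of 64 output files is an arbitrary example value.
    input.coalesce(64)
      .write
      .mode("overwrite")
      .text("/data/sdr/daily_compacted/")

    spark.stop()
  }
}
```

Downstream jobs would then read from the compacted directory instead of the original one, so that the split computation scales with the (much smaller) number of compacted files.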

