
Too Many Open Files


Hello, everyone!

Symptom

The maximum number of file handles on the node is set to a small value, so processes run out of file handles. As a result, writing files to HDFS is slow or fails.

Applicable Versions

V100R002C30SPC60*,

V100R002C50SPC20*,

V100R002C60SPC20*,

V100R002C60U10,V100R002C60U10SPC00*,

V100R002C60U20,V100R002C60U20SPC00*,

V100R002C70SPC20*,

V100R002C80SPC20*

Fault Locating

1. The log records the following error: java.io.IOException: Too many open files.

2016-05-19 17:18:59,126 | WARN  | org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@142ff9fa | YSDN12:25009:DataXceiverServer:  | org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:160) 
java.io.IOException: Too many open files 
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) 
at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100) 
at org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:134) 
at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:137) 
at java.lang.Thread.run(Thread.java:745)

2. The error indicates that the process has run out of file handles: no more file handles can be opened, so writing files is slow or fails.
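As a quick additional check (not part of the log above, just a common way to confirm the diagnosis), you can count how many descriptors the DataNode process currently holds; replace <PID> with the DataNode's process ID:

ls /proc/<PID>/fd | wc -l

If this count is close to the Max open files value shown in /proc/<PID>/limits, the process is about to hit the limit.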

Solution

1. Run the ulimit -a command to check the maximum number of file handles configured on the involved node. If the value is too small, change it to 640000.

ulimit -a
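The value to check in the ulimit -a output is the "open files" line; ulimit -n prints it alone. As an illustration only (the exact output depends on the OS and current settings), a node configured as recommended would show something like:

ulimit -n
640000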

2. Run the vi /etc/security/limits.d/90-nofile.conf command to edit this file and set the number of file handles to 640000. If the file does not exist, create it and add contents of the following form:

* soft nofile 640000
* hard nofile 640000

3. Open another terminal and run the ulimit -a command to check whether the modification has taken effect (the change applies only to new login sessions, not to the current shell). If it has not, perform the preceding operations again.

4. If ulimit -a reports 640000 file handles but the problem persists, run the jps command to find the PID of the corresponding process, and then run the cat /proc/PID/limits command to check the maximum number of file handles of that process. If the value of Max open files is still far less than 640000, check whether the OS version is CentOS 7.4 or Red Hat 7.4.
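A sketch of this check follows; the PID 12345 is made up, so use the PID that jps actually prints for the DataNode on your node:

jps
12345 DataNode
cat /proc/12345/limits | grep "Max open files"
Max open files            640000               640000               files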

5. Run the vi /etc/security/limits.conf command to edit this file. Modify the number of file handles.

vi /etc/security/limits.conf
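The entries take the same form as in the limits.d file from step 2. As an assumed example (note that the * wildcard domain does not cover the root user, so a separate root entry is commonly added):

root soft nofile 640000
root hard nofile 640000
*    soft nofile 640000
*    hard nofile 640000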

6. If the OS version is CentOS 7.4 or Red Hat 7.4, add UsePAM yes to the /etc/ssh/sshd_config file.
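Only one line needs to be added or changed, for example:

UsePAM yes

With UsePAM yes, sshd runs the PAM session modules (including pam_limits) at login, which is what applies the limits configured above to SSH sessions.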

7. Restart the sshd service and the DataNode and NodeAgent processes. If the restart of the sshd service fails, reboot the host.
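On CentOS/Red Hat 7.x, sshd can be restarted with systemctl. The commands below are a sketch for sshd only; restart the DataNode and NodeAgent processes using whatever mechanism your deployment normally provides:

systemctl restart sshd
systemctl status sshd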

This is my solution. How about yours? Go ahead and share it with us!

