Connectivity to the remote NameNode when backing up HDFS data

Created: Jun 10, 2019 21:18:36 | Latest reply: Jun 11, 2019 20:14:26
  Rewarded Hi-coins: 0 (problem resolved)

I need to back up the HDFS data to a standby cluster.


For this, I enabled cross-cluster replication, and the cluster is able to find the remote NameNode. However, once I start the backup job, it fails with errors like:


"[2019-06-10 18:48] Detail: Fail to copy file by HDFS Distcp. Error: Failed on local exception: java.io.IOException: Couldn't setup connection for backup/manager@HADOOP.COM to H00101992/10.19.150.92:25000; Host Details : local host is: "linux-174/10.19.150.174"; destination host is: "H00101992":25000; 
[2019-06-10 18:48] Temporary files are cleared successfully after backup checkpoint "


Please suggest how to fix the above issue.


Thanks in advance,


Sudarsan
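The error message itself names the destination the client tried to reach. As a quick sanity check, you can pull the host, IP, and port out of that log line (abbreviated below) and compare them with the remote NameNode addresses you configured; a mismatch points at a wrong-host configuration rather than a network fault. A minimal illustrative sketch:

```shell
# Hypothetical one-line excerpt of the DistCp error from the backup log
err="Couldn't setup connection for backup/manager@HADOOP.COM to H00101992/10.19.150.92:25000"

dest=${err##* }    # last whitespace-separated token: H00101992/10.19.150.92:25000
host=${dest%%/*}   # hostname part
addr=${dest#*/}    # ip:port part
ip=${addr%%:*}
port=${addr##*:}
echo "$host $ip $port"   # -> H00101992 10.19.150.92 25000
```

Whatever the extraction prints should match the NameNode entries of the peer cluster exactly; `25000` is the fixed NameNode RPC port mentioned later in this thread.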




All Answers
songminwang Moderator Created Jun 10, 2019 22:26:20 Helpful(0)

Hello!

You can check the following:

1. Whether cluster mutual trust is configured.

2. Whether the network between the two clusters is normal.

3. Whether cross-cluster data replication is enabled in Yarn.
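For the network check, a plain TCP probe of the remote NameNode RPC port from a node in the local cluster separates network/firewall problems from Kerberos ones. The IP and port below come from the error in the original post; `check_nn` is just an illustrative helper, not a product tool:

```shell
# Probe whether a NameNode RPC endpoint accepts TCP connections.
# Uses bash's /dev/tcp pseudo-device; a failure here indicates a network,
# firewall, or wrong-host problem rather than an authentication one.
check_nn() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "reachable: $host:$port"
  else
    echo "unreachable: $host:$port"
  fi
}

check_nn 10.19.150.92 25000
```

If the port is unreachable, fix routing/firewall rules first; if it is reachable, look at mutual trust and the RPC protection settings discussed below in this thread.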

wissal MVE Created Jun 11, 2019 04:10:02 Helpful(0)

Hello,
Since you need to back up the HDFS data to the standby cluster, first check whether HDFS on the standby cluster has sufficient space. Target Path indicates the full path of the HDFS directory used for storing the backup.
Thanks
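To make that space check concrete, you can compare the size of the data to be copied against the free space on the standby cluster. The helper below is a hypothetical sketch; the `hdfs dfs` commands in the comment are the usual way to obtain the byte counts on a real cluster (paths are placeholders):

```shell
# has_space NEEDED_BYTES FREE_BYTES -> prints "ok" or "insufficient"
has_space() {
  if [ "$2" -ge "$1" ]; then echo ok; else echo insufficient; fi
}

# On a real cluster the two inputs would come from, e.g.:
#   need=$(hdfs dfs -du -s /src/data | awk '{print $1}')
#   free=$(hdfs dfs -df / | awk 'NR==2 {print $4}')
has_space 1073741824 5368709120   # 1 GiB needed, 5 GiB free -> ok
```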

Sudarsan Created Jun 11, 2019 14:45:28 Helpful(0)

Posted by wissal at 2019-06-11 04:10 Hello,As you need to take the backup of the HDFS data to the standby cluster.To resolve the issue pl ...
Hello,

We have enough space in the target (standby) cluster.

wissal MVE Created Jun 11, 2019 18:01:56 Helpful(0)

Posted by Sudarsan at 2019-06-11 07:45 Hello,we have enough space in the target(standby) cluster.
Hello,
Please check that hadoop.rpc.protection is set to the same value on both HDFS clusters; the two ends must use the same data transmission mode. The value privacy means the channels are encrypted (the default), while authentication means the channels are not encrypted.
Thanks
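In other words, the following property must carry the same value in the core-site.xml of both clusters. This is a sketch of the relevant fragment (`privacy` is shown as an example; `authentication` on both sides also works, as long as the two ends match):

```xml
<!-- core-site.xml, identical value on both clusters -->
<property>
  <name>hadoop.rpc.protection</name>
  <!-- privacy = encrypted channels (default); authentication = unencrypted -->
  <value>privacy</value>
</property>
```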


TingtingGG Created Jun 11, 2019 20:14:26 Helpful(0)

You may have configured the wrong hosts; please check.
Otherwise, follow the product documentation and perform the steps below.
Use the DistCp tool to configure both clusters:
1. Log in to FusionInsight Manager of a cluster.
2. Choose Services > Yarn > Service Configuration and set Type to All.
3. In the navigation tree, choose Yarn > Distcp.
4. Set dfs.namenode.rpc-address.haclusterX.remotenn1 to the service IP address and RPC port number of one NameNode instance of the peer cluster, and set dfs.namenode.rpc-address.haclusterX.remotenn2 to the service IP address and RPC port number of the other NameNode instance of the peer cluster.
dfs.namenode.rpc-address.haclusterX.remotenn1 and dfs.namenode.rpc-address.haclusterX.remotenn2 do not distinguish active and standby NameNode instances. The default NameNode RPC port number is 25000 and cannot be modified on FusionInsight Manager.
Examples of modified parameter values: 10.1.1.1:25000 and 10.1.1.2:25000.
If Federation is configured in the peer cluster with multiple pairs of NameNodes (multiple NameServices), you can only configure the RPC addresses of the two NameNodes in one of the NameServices here. Configuring the RPC addresses of two NameNodes that do not belong to the same NameService is prohibited.
5. Click Save Configuration, select Restart the role instance, and click OK to restart the Yarn service.
After the system displays "Operation succeeded", click Finish. The Yarn service is successfully restarted.

6. Log in to FusionInsight Manager of the other cluster and repeat the preceding operations.
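For reference, step 4 above amounts to setting properties equivalent to the following fragment. The cluster name `haclusterX` and the IP addresses are the examples from the steps; the actual values come from your peer cluster's two NameNode instances:

```xml
<!-- Yarn > Distcp configuration, one entry per peer NameNode instance -->
<property>
  <name>dfs.namenode.rpc-address.haclusterX.remotenn1</name>
  <value>10.1.1.1:25000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.haclusterX.remotenn2</name>
  <value>10.1.1.2:25000</value>
</property>
```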
