Hive: Common Faults of Clients

Latest reply: Dec 18, 2019 07:37:52

Hello, everyone!

This post shares common faults of Hive clients and their solutions.

1.1 Common Faults of Clients

1.1.1 "authentication failed" Is Prompted When a Shell Client Is Connected

Applicable Versions

V100R002C30SPC60*

V100R002C50SPC20*

V100R002C60SPC20*

V100R002C60U10,V100R002C60U10SPC00*

V100R002C60U20,V100R002C60U20SPC00*

V100R002C70SPC20*

Symptom

In clusters in security mode, when the HiveServer service is normal but the beeline command fails to be executed on the Shell client, the system prompts "authentication failed".

Debug is true storeKey false useTicketCache true useKeyTab false doNotPrompt false ticketCache is null isInitiator true KeyTab is null refreshKrb5Config is false principal is null tryFirstPass is false useFirstPass is false storePass is false clearPass is false  
Acquire TGT from Cache  
Credentials are no longer valid  
Principal is null  
null credentials from Ticket Cache  
[Krb5LoginModule] authentication failed 
No password provided

Possible Causes

•   The client user has not performed security authentication.

•   The Kerberos ticket has expired.

Solution

Step 1      Log in to the node where the Hive client resides.

Step 2      Run the source $client_home/bigdata_env command.

Run the klist command to check whether there is a valid ticket on the local node. The following output shows a ticket that became valid at 14:11:42 on December 24, 2016 and expires at 14:11:40 on December 25, 2016. Within this period, the ticket is valid.

klist 
Ticket cache: FILE:/tmp/krb5cc_0 
Default principal: admin@HADOOP.COM 
Valid starting     Expires            Service principal 
12/24/16 14:11:42  12/25/16 14:11:40  krbtgt/HADOOP.COM@HADOOP.COM

Step 3      Run the kinit username command for authentication and log in to the client again.
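The steps above can be sketched as a single shell session. The username testuser is a placeholder; adjust $client_home to your actual client installation path:

```shell
# Load the client environment variables.
source $client_home/bigdata_env

# klist -s exits non-zero if the local cache has no valid ticket,
# so re-authenticate only when needed.
if ! klist -s; then
    # Obtain a new TGT for the user; kinit prompts for the password.
    kinit testuser
fi

# Reconnect with the Hive client once the ticket is valid.
beeline
```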

----End


1.1.2 "The ZooKeeper client is AuthFailed" Is Prompted on the Client

Applicable Versions

V100R002C30SPC60*

V100R002C50SPC20*

V100R002C60SPC20*

V100R002C60U10,V100R002C60U10SPC00*

V100R002C60U20,V100R002C60U20SPC00*

V100R002C70SPC20*

Symptom

In clusters in security mode, when the HiveServer service is normal and SQL is executed by using the JDBC interface to connect to HiveServer, "The ZooKeeper client is AuthFailed" is reported, as shown in the following.

14/05/19 10:52:00 WARN utils.HAClientUtilDummyWatcher: The ZooKeeper client is AuthFailed 
 14/05/19 10:52:00 INFO utils.HiveHAClientUtil: Exception thrown while reading data from znode.The possible reason may be connectionless. This is recoverable. Retrying..  
 14/05/19 10:52:16 WARN utils.HAClientUtilDummyWatcher: The ZooKeeper client is AuthFailed  
 14/05/19 10:52:32 WARN utils.HAClientUtilDummyWatcher: The ZooKeeper client is AuthFailed  
 14/05/19 10:52:32 ERROR st.BasicTestCase: Exception: Could not establish connection to active hiveserver  
 java.sql.SQLException: Could not establish connection to active hiveserver

Or "Unable to read HiveServer2 configs from ZooKeeper" is reported, as shown in the following.

Exception in thread "main" java.sql.SQLException: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper 
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:144)  
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) 
at java.sql.DriverManager.getConnection(DriverManager.java:664) 
at java.sql.DriverManager.getConnection(DriverManager.java:247) 
at JDBCExample.main(JDBCExample.java:82) 
Caused by: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 configs from ZooKeeper 
at org.apache.hive.jdbc.ZooKeeperHiveClientHelper.configureConnParams(ZooKeeperHiveClientHelper.java:100) 
at org.apache.hive.jdbc.Utils.configureConnParams(Utils.java:509) 
at org.apache.hive.jdbc.Utils.parseURL(Utils.java:429) 
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:142) 
... 4 more 
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hiveserver2 
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99) 
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) 
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:2374) 
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214) 
at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203) 
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) 
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:200) 
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191) 
at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38)

Possible Causes

When the client connects to HiveServer, the HiveServer URI is automatically obtained from ZooKeeper. If ZooKeeper connection authentication is abnormal, the HiveServer address cannot be obtained from ZooKeeper correctly.

When ZooKeeper connection authentication is performed, krb5.conf, principal, keytab, and related information must be loaded to the client. Authentication failure causes are as follows:

•   The user.keytab path is incorrectly entered.

•   user.principal is incorrectly entered.

•   The cluster has switched its domain name, but the old principal is still used when the client builds the URL.

•   Firewall configurations prevent the client from passing Kerberos authentication.

Solution

Step 1      Ensure that the user can properly access the user.keytab file in related paths on the client node.

Step 2      Ensure that the user.principal being used corresponds to the specified keytab file.

Run the klist -kt keytabpath/user.keytab command to check it.
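For reference, klist -kt lists every principal stored in a keytab; the path and output below are illustrative only. The principal shown must match the user.principal value configured on the client:

```shell
# List the principals contained in the keytab file.
klist -kt /opt/client/user.keytab
# Typical output (illustrative):
# Keytab name: FILE:/opt/client/user.keytab
# KVNO Timestamp         Principal
# ---- ----------------- ------------------------------------------------
#    2 12/24/16 14:11:42 testuser@HADOOP.COM
```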

Step 3      If the cluster has switched its domain name, the principal field in the URL must use the new domain name.

For example, the default value is hive/hadoop.hadoop.com@HADOOP.COM. If the cluster has switched its domain name to abc.com, the field must be changed to hive/hadoop.abc.com@ABC.COM.
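As an illustration, a JDBC URL using ZooKeeper service discovery with the new domain name might look as follows. The host names and ports are placeholders; serviceDiscoveryMode and zooKeeperNamespace are the standard Hive JDBC connection parameters:

```shell
# Connect through ZooKeeper service discovery; note that the principal
# uses the new domain name (abc.com / ABC.COM).
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/hadoop.abc.com@ABC.COM"
```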

Step 4      Ensure that authentication is normal and HiveServer can be connected.

Run the following commands on the client:

source $client_home/bigdata_env

kinit username

Run the beeline command on the client to ensure normal running.

----End


1.1.3 "Invalid function" Is Prompted When the UDF Function Is Used

Applicable Versions

V100R002C50SPC20*

V100R002C60SPC20*

V100R002C60U10,V100R002C60U10SPC00*

V100R002C60U20,V100R002C60U20SPC00*

V100R002C70SPC20*

Symptom

When a UDF is created on the Hive client using Spark, error 10011 indicating "invalid function" is reported, as shown in the following:

Error: Error while compiling statement: FAILED: SemanticException [Error 10011]: Line 1:7 Invalid function 'test_udf' (state=42000,code=10011)

This problem occurs when multiple HiveServers share a UDF. For example, if metadata is not synchronized in time, a UDF created on HiveServer2 cannot be resolved by HiveServer1, and clients connected to HiveServer1 receive the preceding error.

Possible Causes

Metadata shared by multiple HiveServers, or by Hive and Spark, is not synchronized, causing in-memory data inconsistency between HiveServer instances and making the UDF invalid.

Solution

Step 1      Synchronize the new UDF information to each HiveServer and reload the function.
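A minimal sketch of this step, assuming a Beeline session on the HiveServer whose metadata is stale. RELOAD FUNCTION is the Hive statement that refreshes permanent functions registered through other HiveServer instances (the table and column in the verification query are hypothetical):

```shell
# On the client connected to the HiveServer with stale metadata,
# refresh the registered permanent functions.
beeline -e "RELOAD FUNCTION;"

# Afterwards the UDF should resolve, for example:
beeline -e "SELECT test_udf(col1) FROM example_table LIMIT 1;"
```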

----End


1.1.4 If Text Files Are Compressed Using the ARC4 Algorithm, Garbled Characters Are Returned After the select Query

Applicable Versions

V100R002C30SPC60*

V100R002C50SPC20*

V100R002C60SPC20*

V100R002C60U10,V100R002C60U10SPC00*

V100R002C60U20,V100R002C60U20SPC00*

V100R002C70SPC20*

Symptom

If a Hive query result table is compressed and stored using the ARC4 algorithm, garbled characters are returned after the select * query is conducted in the result table.

Solution

Step 1      If garbled characters are returned after the select query, set the following parameters in Beeline.

set mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.encryption.arc4.ARC4BlockCodec;

set hive.exec.compress.output=true;

Step 2      Import the table to a new table using block decompression.

insert overwrite table tbl_result select * from tbl_source;

Step 3      Query again.

select * from tbl_result;
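Steps 1 to 3 can be combined into one Beeline session. tbl_source and tbl_result are the example table names from above, and the codec class is the ARC4 block codec named in Step 1:

```shell
beeline <<'EOF'
-- Step 1: select the ARC4 block codec and enable compressed output.
set mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.encryption.arc4.ARC4BlockCodec;
set hive.exec.compress.output=true;
-- Step 2: rewrite the data into a new table using block compression.
insert overwrite table tbl_result select * from tbl_source;
-- Step 3: query the new table; the result is no longer garbled.
select * from tbl_result;
EOF
```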

----End

That's all, thanks!
