Failed to Start Spark During the Cluster Installation

During the cluster installation, the system displays a message indicating that the Spark service fails to start.

Procedure:

1. Use PuTTY to log in to each node as user root.
2. Run the following commands to go to /usr/lib64/ and check whether the libhadoop.so file exists:

cd /usr/lib64
find libhadoop*

If the following information is displayed, the file exists:

libhadoop.so
libhadoop.so.1.0.0
3. Run the following commands to delete the files:

rm -f libhadoop.so.1.0.0
rm -f libhadoop.so
4. Restart the Spark service. The fault is rectified.
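Steps 2 and 3 can be sketched as a small shell function that checks a library directory for stray libhadoop files and removes any it finds. This is a minimal illustration, not part of the product; the default directory /usr/lib64 comes from the procedure above, and the directory argument exists only so the function can be tried safely elsewhere first.

```shell
#!/bin/sh
# Sketch of steps 2-3: look for stray libhadoop* files in a library
# directory and delete them so the Spark service can start with the
# Hadoop native library shipped with the cluster software.
# The optional argument defaults to /usr/lib64 as in the procedure.
remove_stray_libhadoop() {
    libdir="${1:-/usr/lib64}"
    # Step 2: check whether any libhadoop* file exists in the directory.
    found=$(find "$libdir" -maxdepth 1 -name 'libhadoop*' 2>/dev/null)
    if [ -z "$found" ]; then
        echo "No libhadoop files in $libdir; nothing to do."
        return 0
    fi
    # Step 3: delete each stray copy.
    for f in $found; do
        rm -f "$f"
        echo "Removed $f"
    done
}
```

Run this on each node before restarting Spark (step 4); it must be executed as root when targeting /usr/lib64.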
