Failure Due to an Existing Node Agent During Cluster Installation


During cluster installation, an error message indicates that the node agent is already installed. To resolve the issue, do as follows:
1. Log in to the active management node as user root using PuTTY.
2. Check whether node agent installation information exists in controller.log on the active management node. The log file path is /var/log/Bigdata/controller/controller.log:
cat /var/log/Bigdata/controller/controller.log
If the node agent is already installed, the log contains information similar to the following:
...is found with existing node agent installation
3. Obtain the node on which the node agent installation failed from the error information in the log or from the error message displayed on the FusionInsight Manager portal.
4. Log in to the node where the node agent installation failed as user root using PuTTY.
5. Uninstall the node agent by running the ${BIGDATA_HOME}/om-agent/nodeagent/setup/uninstall.sh script (a command sketch follows this procedure).
6. On the FusionInsight Manager portal, click Retry to install the component again.
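For reference, steps 2 and 5 roughly correspond to the commands below. This is a minimal sketch: the grep pattern is an assumption based on the log message quoted in step 2, invoking the script with sh is an assumption, and the uninstall command assumes BIGDATA_HOME is set in the environment of the failed node.
# On the active management node: check controller.log for an existing node agent (assumed grep pattern)
grep "existing node agent installation" /var/log/Bigdata/controller/controller.log
# On the node where the installation failed: uninstall the existing node agent
sh ${BIGDATA_HOME}/om-agent/nodeagent/setup/uninstall.sh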

Other related questions:
Querying the active OpenStack controller node during eBackup installation
To query the active node among the OpenStack controller nodes, do as follows (a command sketch follows these steps):
1. Log in to the OpenStack reverse proxy node in SSH mode.
2. Run the su - root command and enter the password of user root as prompted to switch to user root.
3. Run the source set_env command, select the OpenStack environment variables (keystone v3), and enter the password to import the environment variables.
4. Run the cps template-instance-list --service cps cps-server command. In the command output, the node whose status is active is the active controller node.
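For reference, steps 2 to 4 correspond to the following command sequence on the OpenStack reverse proxy node. This is a minimal sketch: the password prompts and the environment selection are interactive, and the set_env script and cps client are assumed to be available as described in the steps above.
su - root
source set_env
# Select the OpenStack environment variables (keystone v3) and enter the password when prompted.
cps template-instance-list --service cps cps-server
# In the output, the node whose status is "active" is the active controller node.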
