[Problem Description]
Two controllers are mistakenly removed and reinserted. As a result, the system fails to start.
[Fault Symptom]
Startup fails and the system enters the minisystem.
[Cause]
The system fails to start due to the loss of dirty data or CCDB data.
The following steps apply only to devices that are being powered on or deployed for the first time.
[Location Method]
Enter the user name and password to log in to the storage CLI (user name: admin; default password: Admin@storage), and then run the minisystem command as prompted to enter the minisystem.
[Solution]
Step 1 Run the showsystrace and showsystrace 2 commands in the minisystem of the main control board.
If the value of FAIL ACTION is CheckRecovDirtyData, as shown in the following figure, run the sys.sh cleardirtydataflag command, and then run the rebootsys command to restart the system.

If the value of FAIL ACTION is NtfClsUtil(CCDB/DLM/C-CLS), as shown in the following figure, run the following commands on node 0:
ccdb.sh -c setccdbdirtyflag 0 0 1
ccdb.sh -c setccdbdirtyflag 0 1 2
Then, run the rebootsys command on the two controllers to restart the system.
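The two FAIL ACTION branches in Step 1 can be sketched as a small dispatch helper. This is an illustration only: the function prints the documented recovery commands rather than executing them, and the exact spelling of the NtfClsUtil value is taken verbatim from the text above.

```shell
#!/bin/sh
# Sketch: map a FAIL ACTION value (from showsystrace output) to the
# recovery commands documented in Step 1. Prints commands; does not run them.
suggest_recovery() {
  fail_action="$1"
  case "$fail_action" in
    CheckRecovDirtyData)
      echo "sys.sh cleardirtydataflag"
      echo "rebootsys"
      ;;
    "NtfClsUtil(CCDB/DLM/C-CLS)")
      echo "ccdb.sh -c setccdbdirtyflag 0 0 1"
      echo "ccdb.sh -c setccdbdirtyflag 0 1 2"
      echo "rebootsys  # run on both controllers"
      ;;
    *)
      echo "unrecognized FAIL ACTION: $fail_action" >&2
      return 1
      ;;
  esac
}

suggest_recovery "CheckRecovDirtyData"
```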

Step 2 Run the rebootsys command to restart the system. Wait at least 30 minutes and then log in to the storage CLI. If the system still prompts you to log in to the minisystem, proceed with the subsequent operations. If you can log in to the storage device, run the show controller general command to query the controller status, as shown in the following figure. If the controller status is normal, no further action is required. Otherwise, go to Step 3.
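The Step 2 decision can be sketched as a check over saved show controller general output. Note that the "Health Status" field name and the sample listing below are assumptions about the command's output format, not documented values.

```shell
#!/bin/sh
# Sketch: decide from saved `show controller general` output whether all
# controllers are normal. Field name "Health Status" is an assumption.
all_controllers_normal() {
  # Reads command output on stdin; exits 0 only if every line that
  # mentions a health status also reports Normal.
  awk 'tolower($0) ~ /health status/ && tolower($0) !~ /normal/ {bad=1}
       END {exit bad}'
}

# Usage with a fabricated two-controller listing:
cat <<'EOF' | all_controllers_normal && echo "controllers normal" || echo "go to Step 3"
Controller 0A  Health Status : Normal
Controller 0B  Health Status : Normal
EOF
```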

Step 3 Run the sys.sh showflowtrace command to check the cluster status. In this scenario, the failure cause is RecoverDirtyDta:FreeVnodeLock.

Run the following commands to clear the vnode dirty data flags, and then run the rebootsys command on the two controllers to reset them:
sys.sh clearvnodedirtydataflag 0
sys.sh clearvnodedirtydataflag 1
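Step 3 can likewise be sketched as a scan of saved sys.sh showflowtrace output for the documented failure cause. As above, the helper only prints the clear commands; running them on a live system is left to the operator.

```shell
#!/bin/sh
# Sketch: scan saved `sys.sh showflowtrace` output (on stdin) for the
# FreeVnodeLock failure cause and print the documented clear commands.
emit_vnode_clear() {
  if grep -q 'RecoverDirtyDta:FreeVnodeLock'; then
    echo "sys.sh clearvnodedirtydataflag 0"
    echo "sys.sh clearvnodedirtydataflag 1"
    echo "rebootsys  # run on both controllers"
  else
    echo "FreeVnodeLock cause not found; no vnode flags to clear" >&2
    return 1
  fi
}

echo "RecoverDirtyDta:FreeVnodeLock" | emit_vnode_clear
```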