Symptom
After the storage system is started, the location of the disk enclosure where the coffer disks reside is adjusted, and the system is then restarted. The system fails to start.
Possible Causes
To prevent data loss on the coffer disks, you are not allowed to adjust the location of the disk enclosure where the coffer disks reside.
When the storage system is started, the system checks whether the actual configuration of the disk enclosure is the same as that recorded in the system.
If the disk enclosure configuration differs from that recorded in the system, the check fails and the system cannot start.
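Conceptually, the startup check is an equality comparison between the enclosure position recorded in the system and the position detected at power-on. The sketch below is purely illustrative: the variable names and position values are made up and do not reflect the storage system's actual implementation.

```shell
# Illustrative sketch only: the position strings are hypothetical values,
# not real system data.
recorded="enclosure0:slot3"   # position recorded before shutdown (made-up value)
detected="enclosure0:slot5"   # position detected at startup (made-up value)

if [ "$recorded" = "$detected" ]; then
  echo "check passed: system starts"
else
  echo "check failed: system does not start"
fi
```

Because the detected position differs from the recorded one, the check fails, which is exactly the situation this article describes.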
Recommended Actions
Method 1: Put the disk enclosure with the original configuration in its previous position,
connect the disk enclosure to the system, and power on the system again.
Method 2: Replace the disk enclosure with a spare part and power on the system again.
Method 3: Clear the configuration data and power on the system again.
The operation procedure is as follows:
1. Use an SSH tool, such as Xshell 5, PuTTY 0.63, SecureCRT 6.7, or a later version of any of these, to log in to the management network port
on the storage device as user admin (the default password is Admin@storage).
2. Run the showsysstatus command to check whether the current node ID is the same as that of the master node in the cluster.
If the current node ID is the same as that of the master node in the cluster, the current node is the master node.
Storage: minisystem> showsysstatus
Show system status
admin:/diagnose>sys showcls
mode             : normal
status           : none
node cfg         : 2
node max         : 4
group cfg        : 1
group max        : 2
product          : 0
serial           : SN201406200123456789
WWN              : 0x2100002233551144
local node id    : 1
normalNodeBitmap : 3
faultNodeBitmap  : 0
offlineNodeBitmap: 0
standbyNodeBitmap: 0
id         role       status     group      engine
---------- ---------- ---------- ---------- ----------
0          master     normal     0          0
1          slave1     normal     0          0
admin:/diagnose>exit
Storage: minisystem>
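If you prefer, the master-node comparison in this step can be scripted by parsing the command output. The sketch below uses a sample of the sys showcls output taken from the transcript above; capturing that output over SSH is left to you, and the parsing logic is an assumption for illustration, not a vendor-provided tool.

```shell
# Hypothetical sketch: decide whether the node we are logged in to is the
# cluster master by parsing captured "sys showcls" output.
# The sample text below is copied from the transcript in this article; on a
# live system you would capture the real command output instead.
output='local node id    : 1
id         role       status     group      engine
---------- ---------- ---------- ---------- ----------
0          master     normal     0          0
1          slave1     normal     0          0'

# Extract the ID of the local node (the value after "local node id :").
local_id=$(printf '%s\n' "$output" | awk -F: '/local node id/ {gsub(/ /, "", $2); print $2}')

# Extract the ID of the node whose role column reads "master".
master_id=$(printf '%s\n' "$output" | awk '$2 == "master" {print $1}')

if [ "$local_id" = "$master_id" ]; then
  echo "Node $local_id is the cluster master."
else
  echo "Node $local_id is not the master (master is node $master_id)."
fi
```

For the sample output above, the local node ID is 1 while the master is node 0, so the script reports that the current node is not the master and you would reconnect to node 0 before running cleardb.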
3. Run the cleardb command on the master node to clear the configuration data in the system.
Storage: minisystem> cleardb
DANGER: You are going to clear the configuration data of the storage system and the previous configuration data will be lost.
Suggestion: Before you perform this operation, export the configuration data.
Have you read danger alert message carefully?(y/n)
y
Clear db in all mediums begin......
admin:/diagnose>db clearall
This is DB (cluster master) controller .
Success !!
admin:/diagnose>exit
Storage: minisystem>
4. Run the rebootsys command on each controller in turn to restart the system.
Storage: minisystem> rebootsys
Are you sure to restart?(y/n)
y
Check After Recovery
After all nodes have restarted, check whether the same error code is reported.
If it is not, the fault is rectified and no further action is required.
If it is, contact R&D engineers.