The Logcache Disk of the VRG Fails

1. Log in to FusionCompute at the production site. For details, see Logging In to FusionCompute in the FusionCompute V100R005Cxx Software Installation Guide.
2. Select VM and Template.
The VM and Template page is displayed.

3. Select the VRG VM and choose Operation > Stop.
Stop the VRG VM as prompted. When the VM state becomes Stopped, the VM is successfully stopped.

4. Log in to OceanStor BCManager and click Settings.
The Settings page is displayed.

5. In the navigation tree on the left, choose Resource Mapping > VRG.
The VRG page is displayed.

6. Select the faulty VRG. In the Protected VM area, select the VM in the VRG, click Move, and select Force Remove.
Move the VM from the faulty VRG to another VRG.

7. In FusionCompute, detach the logcache disk from the VRG VM. For details, see Detaching a Disk from a VM in the FusionCompute V100R005Cxx Storage Management Guide.
The logcache disk of the VRG is the second disk, whose capacity is 100 GB.

8. Attach a new logcache disk to the VRG VM. For details, see Attaching a Disk to a VM in the FusionCompute Storage Management Guide.
9. In FusionCompute, choose Operation > Start in the row where the VRG VM resides.
Start the VRG VM. When the VM state becomes Running, the VM is successfully started.

Other related questions:

Number of failed disks allowed by RAID 2.0+
The number of failed disks allowed by RAID 2.0+ depends on the RAID level. RAID 3 and RAID 5 allow only one disk to fail. RAID 6 and RAID 50 allow only two disks to fail. RAID 10 allows up to N disks to fail (a RAID 10 group consists of 2 x N disks), provided that no two failed disks belong to the same mirror pair.
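
As a rough illustration of these limits, the following Python sketch maps each RAID level listed above to the number of disk failures it tolerates. The function name max_failed_disks is purely illustrative and is not a product interface.

    # Illustrative only: disk-failure tolerance per RAID level, as stated above.
    def max_failed_disks(raid_level, mirror_pairs=None):
        if raid_level in ("RAID 3", "RAID 5"):
            return 1
        if raid_level in ("RAID 6", "RAID 50"):
            return 2
        if raid_level == "RAID 10":
            # A RAID 10 group of 2 x N disks tolerates up to N failures,
            # as long as no two failed disks share a mirror pair.
            return mirror_pairs
        raise ValueError("unknown RAID level")

    # Example: a RAID 10 group of 8 disks (2 x 4) tolerates up to 4 failed disks.
    print(max_failed_disks("RAID 10", mirror_pairs=4))  # -> 4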

DR Fails to Be Started After Disks Are Bound
1. Log in to the FusionCompute portal at the production site.
2. Check whether there is a message stating that DR fails after disks are bound. If yes, go to 3. If no, contact technical support.
3. Unbind the disks and bind them again, or shut down the virtual machine and restart it, ensuring that no failure message is displayed.
4. Perform DR for the protected group.

OceanStor 9000 V100R001C01 data recovery for a failed disk
OceanStor 9000 V100R001C01 recovers data for a failed disk using the following methods:
1. If the disk has been offline for at most 10 minutes and has gone offline no more than four times within 10 minutes, OceanStor 9000 V100R001C01 updates the data changes made during the offline period to a normal disk and ensures zero loss of the original data on the failed disk.
2. If the disk has been offline for over 10 minutes, has gone offline five or more times within 10 minutes, or cannot be restored and brought online again, OceanStor 9000 V100R001C01 obtains the data on the failed disk through calculation and writes the data to other normal disks.
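
The choice between these two recovery paths can be summarized in the following Python sketch. It only restates the thresholds above; the function and parameter names are hypothetical and not part of any OceanStor interface.

    # Illustrative decision logic for the two recovery methods described above.
    def recovery_method(offline_minutes, offline_count_10min, recoverable=True):
        if recoverable and offline_minutes <= 10 and offline_count_10min <= 4:
            # Path 1: replay only the data changed while the disk was offline.
            return "update the offline-period changes to a normal disk"
        # Path 2: recompute the failed disk's data and write it to other disks.
        return "rebuild the data through calculation onto other normal disks"

    print(recovery_method(offline_minutes=5, offline_count_10min=2))
    print(recovery_method(offline_minutes=30, offline_count_10min=1))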

Maximum number of failed disks supported by Erasure Code
The maximum number of failed disks supported by Erasure Code depends on the M+N protection policy: the number of failed disks must be smaller than N. In the case of node failures, the maximum number of failed nodes supported by Erasure Code is N. If failed disks also exist, the maximum number of failed nodes supported by Erasure Code is smaller than N. For details, refer to the technical principles of Erasure Code.
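
For example, with a 4+2 policy (M = 4 data fragments, N = 2 parity fragments), the limits above allow fewer than 2 failed disks and at most 2 failed nodes. The following Python sketch simply restates those limits; the helper name ec_tolerance is hypothetical.

    # Illustrative only: tolerance limits of an M+N Erasure Code policy,
    # as stated above (failed disks < N; failed nodes <= N; fewer if disks also fail).
    def ec_tolerance(m, n):
        return {
            "policy": f"{m}+{n}",
            "max_failed_disks": n - 1,
            "max_failed_nodes": n,
            "max_failed_nodes_with_failed_disks": n - 1,
        }

    # Example: a 4+2 policy tolerates 1 failed disk or up to 2 failed nodes.
    print(ec_tolerance(4, 2))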
