Hello, everyone!
In this post, I want to introduce the principles behind several storage virtualization technologies: snapshots, linked clones, and storage migration.
Introduction to Snapshots
A snapshot preserves a VM's state and data, including its disk, memory, and register data. By applying a snapshot, users can repeatedly revert the VM to any of its previously captured states. Before performing critical operations, such as applying system patches, upgrading, or running destructive tests, VM users are advised to take a snapshot so that the VM can be restored quickly if something goes wrong.
FusionCompute supports common snapshots, consistency snapshots, and memory snapshots.
Common snapshot: saves the VM's current disk data.
Memory snapshot: saves the VM's current memory data as well.
Consistency snapshot: saves the VM's cache data before the snapshot is taken, so that the captured data is consistent.
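To make the difference between these options concrete, here is a minimal sketch in Python; the SnapshotType enum, the take_snapshot helper, and the vm methods (flush_cache, capture_disk_state, capture_memory_state) are all hypothetical illustrations, not FusionCompute APIs.

from enum import Enum, auto

class SnapshotType(Enum):
    COMMON = auto()       # disk data only
    MEMORY = auto()       # disk data plus current memory contents
    CONSISTENCY = auto()  # VM cache is saved before the disk data is captured

def take_snapshot(vm, snap_type: SnapshotType):
    """Hypothetical helper showing which data each snapshot type captures."""
    if snap_type is SnapshotType.CONSISTENCY:
        vm.flush_cache()  # persist cached writes so the captured disk data is consistent
    snapshot = {"disk": vm.capture_disk_state()}
    if snap_type is SnapshotType.MEMORY:
        snapshot["memory"] = vm.capture_memory_state()
    return snapshot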
Snapshot Principles
The snapshot lifecycle consists of three operations: creating a snapshot, rolling back to a snapshot, and deleting a snapshot.
For the detailed deletion steps, see How to delete the snapshot.
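These operations can be pictured with a simple delta-file model: creating a snapshot freezes the current disk file and starts a new delta file for subsequent writes, rolling back discards the deltas written after the snapshot point, and deleting a snapshot merges its delta into the parent. The sketch below is a simplified, hypothetical model of that idea, not FusionCompute's actual on-disk format.

class DiskChain:
    """Toy model of a snapshot chain: a base disk file plus ordered delta files."""

    def __init__(self, base_file):
        self.files = [base_file]  # files[0] is the base; later entries are deltas
        self.snapshots = {}       # snapshot name -> index of the file frozen by it

    def create_snapshot(self, name):
        # Freeze the current top file and start a new delta that receives new writes.
        self.snapshots[name] = len(self.files) - 1
        self.files.append(f"delta_{len(self.files)}")

    def rollback(self, name):
        # Discard every delta written after the snapshot point, then start a fresh delta.
        frozen = self.snapshots[name]
        self.files = self.files[:frozen + 1]
        self.files.append(f"delta_{len(self.files)}")

    def delete_snapshot(self, name):
        # In a real implementation the frozen delta would be merged into its parent;
        # this toy model simply drops the bookkeeping entry.
        del self.snapshots[name]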
Introduction to Linked Cloning
Linked cloning provisions multiple VMs from the same template. A delta disk (a differencing disk created during the cloning process) is created for the system disk in the VM template and attached to each new VM, so the cloned VMs share the parent VM's virtual disk while writing their own changes to their individual delta disks. Linked cloning is suitable when a large number of VMs with the same or similar configuration need to be created and performance requirements are low.
When creating multiple linked clones, the parent VM's frequently used data can be cached in the host's memory so that it can be retrieved quickly, improving VM startup and running speed.
Linked Clone Implementation Principles
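A linked clone can be pictured as a private delta disk layered on top of the shared, read-only parent disk: reads fall through to the parent unless the clone has rewritten the block, while writes always land in the clone's own delta. The sketch below is a simplified illustration of that idea (the class and the block-dict layout are assumptions, not the actual FusionCompute implementation).

class LinkedClone:
    """Toy linked clone: a private delta layered over a shared, read-only parent disk."""

    def __init__(self, parent_disk):
        self.parent = parent_disk  # dict of block number -> data, shared by all clones
        self.delta = {}            # blocks this clone has rewritten

    def read(self, block):
        # Prefer the clone's own delta; otherwise fall through to the shared parent.
        return self.delta.get(block, self.parent.get(block))

    def write(self, block, data):
        # All writes land in the clone's delta; the parent disk is never modified.
        self.delta[block] = data

# Many clones can share one parent image while diverging independently.
template_disk = {0: "bootloader", 1: "os-image"}
vm_a, vm_b = LinkedClone(template_disk), LinkedClone(template_disk)
vm_a.write(1, "os-image+patch")
assert vm_b.read(1) == "os-image"  # vm_b still sees the unmodified parent block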
Introduction to Storage Live Migration
FusionSphere supports both cold and live migration of VM disks. Cold migration moves VM disk files from one datastore to another while the VM is stopped, whereas storage live migration does so without interrupting services. The live migration mechanism is as follows:
To implement live migration, the system first uses redirect-on-write to send new VM writes to a differencing disk on the destination datastore; the source disk is then set to read-only.
Data blocks on the source disk are read and merged into the target differencing disk. Once all data has been merged, the target differencing disk contains all the data of the virtual disk.
The source disk file is then removed, and the differencing disk is converted into a dynamic disk, after which the disk runs normally on the destination.
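Putting the three steps together: new writes are redirected to a differencing disk on the destination, the existing blocks are copied from the now read-only source and merged underneath them, and finally the source file is removed and the differencing disk is promoted to an ordinary disk. The sketch below is a simplified walk-through of that sequence; the vm.redirect_writes_to and destination_store.attach calls are hypothetical, not FusionSphere APIs.

def storage_live_migrate(vm, source_disk, destination_store):
    """Toy sequence of a storage live migration; disks are modeled as block dicts."""
    diff_disk = {}

    # 1. Redirect-on-write: from now on, new guest writes land in the differencing
    #    disk on the destination, and the source disk is treated as read-only.
    vm.redirect_writes_to(diff_disk)

    # 2. Read the blocks of the read-only source disk and merge them into the
    #    differencing disk, never overwriting blocks the guest has already rewritten.
    for block, data in source_disk.items():
        diff_disk.setdefault(block, data)

    # 3. The differencing disk now holds the complete virtual disk; remove the source
    #    file and promote the differencing disk to an ordinary (dynamic) disk.
    destination_store.attach(vm, diff_disk)
    source_disk.clear()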
Raw Device Mapping (RDM) of Storage Resources
RDM provides a mechanism for VMs to directly access LUNs on physical storage subsystems (Fibre Channel or iSCSI only). This physical device mapping enables VMs to identify SCSI disks.
RDM transparently passes SCSI commands issued by a VM through to the physical SCSI device, avoiding the loss of functionality caused by command emulation in the virtualization layer.
Limitations: The following functions are not supported: linked cloning, thin provisioning, online and offline VHD capacity expansion, incremental storage snapshots, iCache, storage live migration, storage QoS, disk backup, and VM-to-template conversion.
Technical characteristics:
A VM has direct access to a LUN on SAN devices.
RDM is compatible with FC SAN and IP SAN.
RDM is used for applications that require high-performance storage devices, such as Oracle RAC.
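Conceptually, the difference from an ordinary virtual disk is that RDM forwards the guest's SCSI commands to the physical LUN instead of translating them into operations on a disk image file. The sketch below illustrates that distinction; the classes and methods are illustrative only, not a real RDM API.

class PhysicalLun:
    """Stand-in for an FC or iSCSI LUN on the storage array."""
    def submit(self, command):
        return f"LUN executed {command!r}"  # the physical device sees the raw command

class VirtualDisk:
    """Ordinary virtual disk: the hypervisor emulates SCSI on top of an image file."""
    def __init__(self, image_file):
        self.image_file = image_file

    def handle_scsi(self, command):
        # The command is decoded and re-expressed as reads/writes on the image file,
        # so device-specific SCSI features can be lost in translation.
        return f"emulated {command!r} on {self.image_file}"

class RawDeviceMapping:
    """RDM: the guest's SCSI commands are passed straight through to the physical LUN."""
    def __init__(self, lun: PhysicalLun):
        self.lun = lun

    def handle_scsi(self, command):
        return self.lun.submit(command)  # no emulation layer in between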
Storage Capacity Expansion
FusionCompute supports virtual disk expansion and datastore expansion.
Virtual disk expansion allows users to expand the capacity of an online or offline disk.
For a common disk, the data area is expanded and the new space is zeroed out. For a thick-provisioned lazy-zeroed disk, the data area is expanded and the new space is reserved.
For a thin-provisioned disk, only the data area is expanded.
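In short: a common disk zeroes the newly added area, a thick-provisioned lazy-zeroed disk only reserves it, and a thin-provisioned disk just grows its logical size. The small sketch below illustrates those three behaviors; the expand_disk helper and the dict-based disk model are hypothetical, not a FusionCompute API.

def expand_disk(disk, extra_gb):
    """Hypothetical illustration of how newly added space is treated per disk type."""
    disk["size_gb"] += extra_gb  # the data area always grows
    if disk["type"] == "common":
        # The newly added area is zeroed out immediately.
        disk["zeroed_gb"] = disk.get("zeroed_gb", 0) + extra_gb
    elif disk["type"] == "thick_lazy_zeroed":
        # The space is reserved up front but not zeroed immediately.
        disk["reserved_gb"] = disk.get("reserved_gb", 0) + extra_gb
    elif disk["type"] == "thin":
        # Only the logical size grows; physical space is allocated on demand.
        pass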
Datastore expansion enables one datastore to manage the space of multiple physical LUNs. A datastore can be expanded either by adding physical LUNs to the datastore or by expanding the capacity of an existing physical LUN and then the datastore. This improves datastore scalability.
Datastore Expansion Principles
When a datastore needs to be expanded, VIMS appends the new storage space to the end of the virtual block device (VBD) on the active node in linear mapping mode and then adds the new segment to the file system by updating the file system metadata, which completes the expansion on the active node. Because VBD information is held in each node's memory, when the other nodes detect the change in storage space, VIMS updates their VBD information to complete the VBD expansion on those nodes as well.
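As a mental model, the VBD is a linear concatenation of LUN segments: expansion appends a new segment on the active node and updates the file system metadata, and the other nodes refresh their in-memory copy of the mapping when they notice the change. The sketch below is a simplified illustration of that idea, not VIMS code.

class VirtualBlockDevice:
    """Toy VBD: an ordered list of LUN segments mapped linearly end to end."""

    def __init__(self, segments):
        self.segments = list(segments)  # e.g. [("lun-1", 512)], sizes in GB

    def total_size(self):
        return sum(size for _, size in self.segments)

def expand_datastore(active_vbd, other_node_vbds, new_lun, new_size_gb, fs_metadata):
    # 1. On the active node, append the new space to the end of the VBD (linear mapping).
    active_vbd.segments.append((new_lun, new_size_gb))

    # 2. Add the new segment to the file system by updating its metadata, which
    #    completes the expansion on the active node.
    fs_metadata["capacity_gb"] = active_vbd.total_size()

    # 3. Each node keeps the VBD layout in memory, so the other nodes' view is
    #    updated once they detect the change in storage space.
    for vbd in other_node_vbds:
        vbd.segments = list(active_vbd.segments)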
That's all, thanks!