Hello, everyone!
This post shares some FusionCompute storage virtualization concepts and technologies with you.
FusionCompute Storage Virtualization Management
Basic Storage Concepts in FusionCompute
Storage resources
Storage resources refer to physical storage devices, such as IP SAN, FC SAN, and NAS devices.
Storage device
A storage device is a management unit in a storage resource. A storage device can be a logical unit number (LUN), a FusionStorage storage pool, or a shared NAS directory.
Datastore
A datastore is a manageable and operable logical unit in a virtualized system. The sketch below shows how these three concepts relate.
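To make the hierarchy concrete, here is a minimal sketch of how the three concepts map onto one another. The class and field names are illustrative only, not FusionCompute APIs.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StorageDevice:
    """A management unit inside a storage resource, e.g. a LUN,
    a FusionStorage storage pool, or a shared NAS directory."""
    name: str
    capacity_gb: int

@dataclass
class StorageResource:
    """A physical storage system such as an IP SAN, FC SAN, or NAS."""
    name: str
    kind: str                      # "IP SAN", "FC SAN", "NAS", ...
    devices: List[StorageDevice] = field(default_factory=list)

@dataclass
class Datastore:
    """The manageable, operable logical unit that hosts and VMs work with.
    It is created on a storage device."""
    name: str
    backing_device: StorageDevice

# Example: one FC SAN exposing a LUN, which backs a datastore.
san = StorageResource("san-01", "FC SAN")
lun = StorageDevice("lun-0001", capacity_gb=2048)
san.devices.append(lun)
ds = Datastore("datastore-01", backing_device=lun)
```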
Huawei aims to build highly competitive virtualization platforms for telecom carriers using FusionSphere's storage virtualization technology. This technology is based on open-source KVM and has been optimized in terms of security, functionality, performance, and reliability. It provides the following features:
Storage device compatibility: FusionSphere storage virtualization hides the differences between various storage devices, including IP SAN, Fibre Channel (FC) SAN, network-attached storage (NAS) devices, and local disks. It offers file-level service operations based on file systems.
Comprehensive functionality: FusionSphere storage virtualization provides thin provisioning, incremental snapshots, cold and live storage migration, linked cloning, and VM disk capacity expansion (see the thin-provisioning sketch after this list).
Homogeneous service capability: Services run at the virtualization layer. FusionSphere storage virtualization provides homogeneous service capabilities even when different storage devices are used at the underlying layer, and it has no special requirements for the storage devices.
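The following toy example illustrates the thin-provisioning idea from the feature list: the virtual disk advertises its full size, but backing blocks are consumed only when first written. This is a conceptual sketch, not FusionCompute code.

```python
class ThinDisk:
    def __init__(self, virtual_size_blocks: int):
        self.virtual_size_blocks = virtual_size_blocks
        self.allocated = {}        # block index -> data

    def write(self, block: int, data: bytes) -> None:
        if not 0 <= block < self.virtual_size_blocks:
            raise IndexError("write beyond virtual disk size")
        self.allocated[block] = data   # physical space is consumed only here

    def read(self, block: int) -> bytes:
        # Unwritten blocks read back as zeros, like an unallocated extent.
        return self.allocated.get(block, b"\x00" * 512)

    @property
    def physical_blocks_used(self) -> int:
        return len(self.allocated)

disk = ThinDisk(virtual_size_blocks=1_000_000)   # ~488 MiB virtual size at 512 B/block
disk.write(42, b"x" * 512)
print(disk.physical_blocks_used)                 # 1, despite the large virtual size
```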
FusionCompute Storage Virtualization Architecture
The FusionSphere storage virtualization platform consists of a file system, disk drivers, and disk tools. Block devices, such as storage area network (SAN) devices and local disks, are connected to servers. The block device driver layer and the generic block layer provide an abstract view of the block devices and present a single storage device to hosts.
File systems are created on the storage devices that hosts can access. To create a file system, the host formats the storage device, writes metadata and inode data to it, maps files to block devices, and manages the block devices, including space allocation and reclamation. The file system makes operations on block devices easy and painless. VM disks are files stored in the file system.
A VM disk can be used only after it is attached to a VM through disk drivers and managed by QEMU. The front-end driver receives all VM I/O operations and forwards them to the QEMU process. The QEMU process then converts these operations into I/O operations in its user-space driver, which writes the data to the disk files.
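Here is a conceptual sketch of that I/O path: a guest-side front-end driver forwards requests, and a QEMU-like back-end turns them into reads and writes on the disk file. The class names and the file path are illustrative only.

```python
import os

class DiskFileBackend:
    """Stand-in for the user-space block driver inside the QEMU process."""
    def __init__(self, path: str):
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)

    def handle(self, op: str, offset: int, data: bytes = b"", length: int = 0) -> bytes:
        if op == "write":
            os.pwrite(self.fd, data, offset)
            return b""
        return os.pread(self.fd, length, offset)

class FrontEndDriver:
    """Stand-in for the guest front-end driver that forwards all VM I/O."""
    def __init__(self, backend: DiskFileBackend):
        self.backend = backend

    def write(self, offset: int, data: bytes) -> None:
        self.backend.handle("write", offset, data)

    def read(self, offset: int, length: int) -> bytes:
        return self.backend.handle("read", offset, length=length)

backend = DiskFileBackend("/tmp/vm-disk.img")     # the disk file on the datastore
vdisk = FrontEndDriver(backend)
vdisk.write(0, b"hello")
print(vdisk.read(0, 5))                           # b'hello'
```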
A VM disk contains attributes and data blocks. The disk tool performs VM disk-related operations, including parsing disk file headers, reading and modifying disk attributes, and creating data blocks for disks.
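The actual VIMS disk format is not described here; as a stand-in, the sketch below parses the publicly documented QCOW2 header to show what "parsing disk file headers and reading disk attributes" looks like in practice.

```python
import struct

def read_qcow2_attributes(path: str) -> dict:
    """Return basic attributes from a QCOW2 header (big-endian fields)."""
    with open(path, "rb") as f:
        header = f.read(32)
    magic, version, backing_off, backing_len, cluster_bits, virtual_size = \
        struct.unpack(">IIQIIQ", header)
    if magic != 0x514649FB:                  # b"QFI\xfb"
        raise ValueError("not a QCOW2 image")
    return {
        "version": version,
        "cluster_size": 1 << cluster_bits,
        "virtual_size_bytes": virtual_size,
        "has_backing_file": backing_off != 0,   # e.g. a linked clone's parent
    }
```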
FusionCompute Storage Model
Virtual Cluster File System
1. Introduction to VIMS
The Virtual Image Management System (VIMS) is a high-performance cluster file system that enables storage resources to be used across storage systems and allows multiple VMs to access an integrated storage pool, significantly improving resource utilization.
As the basis for virtualizing multiple storage servers, VIMS underpins services such as live migration, dynamic resource scheduling, and high-availability storage.
VIMS Distributed Locking
After a VIMS volume is attached to multiple CNA nodes, these CNA nodes can access files on the VIMS volume. To ensure data consistency when multiple nodes read and write the same file, VIMS implements distributed file locks. The VIMS distributed lock manager (DLM) module implements these locks and provides locking services for the cluster.
Callers use the DLM to synchronize access across cluster nodes.
VIMS uses a distributed symmetric lock mechanism: there are multiple resource masters, each master manages only one lock resource, masters are distributed across different nodes, and there is no central management node.
A node can be a lock resource master under the following conditions:
It is the first node that applies to access a resource.
If multiple nodes access a resource at the same time, the node with the lowest VIMS node ID is used as the master.
If a node becomes faulty, a new master is selected for the lock resources managed by that node. VIMS distributed file lock traffic travels over the management network plane. The sketch below models these master-selection rules.
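This is a toy model of the rules listed above, not the actual DLM implementation; the class and method names are illustrative.

```python
class LockResource:
    def __init__(self, name: str):
        self.name = name
        self.master = None          # node ID of the current master
        self.holders = set()        # node IDs currently using the resource

    def request(self, node_ids):
        """Nodes in node_ids request access to the resource at the same time."""
        self.holders.update(node_ids)
        if self.master is None:
            # First requester wins; on a tie, the lowest VIMS node ID wins.
            self.master = min(node_ids)

    def node_failed(self, node_id):
        self.holders.discard(node_id)
        if self.master == node_id:
            # Re-elect a master for the resource the failed node managed.
            self.master = min(self.holders) if self.holders else None

res = LockResource("vm-disk-001.lock")
res.request({3, 1, 7})      # simultaneous access: node 1 becomes master
res.node_failed(1)          # master fails: node 3 takes over
print(res.master)           # 3
```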
VIMS Heartbeat
VIMS has two types of heartbeats: disk heartbeats and network heartbeats. Disk heartbeats check whether hosts can properly read and write shared storage. Network heartbeats check whether network communication between hosts is normal. Because VIMS is a cluster file system, a CNA node with a VIMS volume attached does not operate as an isolated node; it communicates with the other nodes in the cluster through network heartbeats.
If the node's network is not restored within 146s (20s + 126s), a network partition handling task restarts the node in the invalid partition and isolates the failed node.
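The following minimal sketch shows the network-heartbeat timeout logic under the assumption that partition handling is triggered once a node has been unreachable for 20s + 126s in total; only the timer values come from the text, the rest is illustrative.

```python
import time

DETECT_TIMEOUT_S = 20      # time to declare the network heartbeat lost
RECOVERY_WINDOW_S = 126    # grace period to restore network connectivity

class NetworkHeartbeatMonitor:
    def __init__(self):
        self.last_seen = {}    # node ID -> timestamp of last heartbeat

    def heartbeat(self, node_id):
        self.last_seen[node_id] = time.monotonic()

    def nodes_to_isolate(self):
        """Return nodes whose network has been down for 146s in total."""
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > DETECT_TIMEOUT_S + RECOVERY_WINDOW_S]

monitor = NetworkHeartbeatMonitor()
monitor.heartbeat("cna-01")
# ... later, a partition-handling task would restart and isolate any node
# returned by monitor.nodes_to_isolate().
```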
That's all, thanks!