Hi team, here's a new case.
Abstract
1. Log in to a compute node in the cascaded system (for Region Type1 and AZ1, log in to any node in AZ1 at the cascaded layer, generally the controller node) and import environment variables.
2. Determine the resource pool to which the new compute node belongs (for example, kvm001 or kvm002) by running cps host-list | grep blockstorage-driver | uniq in the background. In this environment there is only one resource pool, kvm001.
3. Check the nodes on which the driver needs to be installed in the corresponding resource pool.
1. Log in to a compute node in the cascaded FusionSphere OpenStack system (for Region Type1 and AZ1, log in to any node in AZ1 at the cascaded layer, generally the controller node) and import environment variables.
2. Determine the resource pool to which the new compute node belongs (for example, kvm001 or kvm002).
Run the following command in the background to view the resource pools:
cps host-list | grep blockstorage-driver | uniq

As shown in the figure, this environment has only one resource pool: kvm001.
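Since the figure is not attached here, the step above can be sketched with a small shell snippet. The sample text below is an invented stand-in for what cps host-list | grep blockstorage-driver | uniq might print; the real column layout may differ, so only the role-name pattern is assumed.

```shell
#!/bin/sh
# Hypothetical sample of lines matched by:
#   cps host-list | grep blockstorage-driver | uniq
# (invented for illustration; the real output has more columns)
sample='blockstorage-driver-kvm001
blockstorage-driver-kvm001
blockstorage-driver-kvm002'

# Strip the fixed role prefix and de-duplicate to get the pool names.
pools=$(printf '%s\n' "$sample" | sed 's/^blockstorage-driver-//' | sort -u)
printf '%s\n' "$pools"
```

With the sample above this prints kvm001 and kvm002; in the environment described here only kvm001 would appear.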
3. Check the IP addresses of the nodes on which the driver needs to be installed in the resource pool.
cps template-instance-list --service cinder cinder-backup-kvm001
(This parameter can be replaced with cinder-backup-kvm002 or cinder-backup; determine the value based on the command output in step 2.)

eBackupDriver needs to be installed only on the nodes listed in the command output.
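To collect those node IP addresses programmatically, the output of the cps template-instance-list command from step 3 could be filtered as sketched below. The table layout and IP values are assumptions invented for illustration, not verified cps output.

```shell
#!/bin/sh
# Hypothetical sample of what
#   cps template-instance-list --service cinder cinder-backup-kvm001
# might print (layout assumed, not verified).
sample='+----------------------+--------------+
| instance             | host         |
+----------------------+--------------+
| cinder-backup-kvm001 | 192.168.1.10 |
| cinder-backup-kvm001 | 192.168.1.11 |
+----------------------+--------------+'

# Extract anything that looks like an IPv4 address from the table.
ips=$(printf '%s\n' "$sample" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}')
printf '%s\n' "$ips"
```

Each address printed is a node where eBackupDriver would need to be installed.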
4. If the compute node to be added is not in the command output for the corresponding resource pool, skip the remaining steps.
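The decision in step 4 can be sketched as a simple membership check. NODE_IP and POOL_NODE_IPS are hypothetical placeholders; in practice the list would come from the cps command output in step 3.

```shell
#!/bin/sh
# Placeholder inputs (assumptions for illustration only):
# NODE_IP       - the compute node being added
# POOL_NODE_IPS - node IPs taken from the step 3 command output
NODE_IP='192.168.1.12'
POOL_NODE_IPS='192.168.1.10
192.168.1.11'

# grep -qx matches a whole line exactly, so partial IPs do not match.
if printf '%s\n' "$POOL_NODE_IPS" | grep -qx "$NODE_IP"; then
    echo "install eBackupDriver on $NODE_IP"
else
    echo "skip: $NODE_IP is not in this resource pool's output"
fi
```

With the placeholder values above, the new node is not in the list, so the remaining installation steps would be skipped.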