feat: add vm-migration-network setting #811


Open · wants to merge 1 commit into `main`

Conversation

FrankYang0529
Member

Problem:

Solution:

Related Issue(s):

harvester/harvester#5848

Test plan:

Additional documentation or context


github-actions bot commented Jun 30, 2025

| Name | Link |
|------|------|
| 🔨 Latest commit | 0732160 |
| 😎 Deploy Preview | https://6870adfc478ae1bdb73db919--harvester-preview.netlify.app |

Contributor

@innobead innobead left a comment

In general, LGTM. Just a few pieces of feedback.


- Verify the `ippools.whereabouts.cni.cncf.io` CRD exists with the following command.
  - `kubectl get crd ippools.whereabouts.cni.cncf.io`
- If the Harvester cluster doesn't have `ippools.whereabouts.cni.cncf.io`, please add [these two CRDs](https://github.com/harvester/harvester/tree/v1.1.0/deploy/charts/harvester/dependency_charts/whereabouts/crds) before configuring `storage-network` setting.
Contributor

Why suggest using the CRDs in 1.1.0 instead of the corresponding Harvester release? Might just copy from the storage network?

Member Author

Remove this since we already have ippools in Harvester.

- VLAN ID
  - Please check with your network switch setting, and provide a dedicated VLAN ID for VM Migration Network.
- Well-configured Cluster Network and VLAN Config
  - Please refer Networking page for more details and configure `Cluster Network` and `VLAN Config` but not `Networks`.
Contributor

Suggest adding the corresponding doc for reference.

Member Author

Updated it.

- IP range for VM Migration Network
  - IP range should not conflict or overlap with Kubernetes cluster networks(`10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16` and `10.53.0.0/16` are reserved).
  - IP range should be in IPv4 CIDR format.
  - Exclude IP addresses that KubeVirt pods and the VM migration network must not use.
Contributor

Wrong indent?
What IP addresses should KubeVirt not use?

Member Author

Contributor

@jillian-maroket jillian-maroket left a comment

Review done. Note that the structural changes that I suggested here will be applied to the "Storage Network" page as well. I will create the PR after we merge this one.

Comment on lines +7 to +13
If the user wishes to isolate VM migration traffic from the Kubernetes cluster network (i.e. the management network) or other cluster-wide workloads. Users can allocate a dedicated vm migration network to get better network bandwidth and performance.

:::note

- Avoid configuring KubeVirt configuration directly, as this can result in unexpected or unwanted system behavior.

:::
Contributor

Suggested change

```diff
-If the user wishes to isolate VM migration traffic from the Kubernetes cluster network (i.e. the management network) or other cluster-wide workloads. Users can allocate a dedicated vm migration network to get better network bandwidth and performance.
-:::note
-- Avoid configuring KubeVirt configuration directly, as this can result in unexpected or unwanted system behavior.
-:::
+A VM migration network is useful for isolating migration traffic from cluster traffic on `mgmt` and other cluster-wide workloads. Using a VM migration network results in better network bandwidth and performance.
+:::note
+Avoid configuring KubeVirt settings directly, as this can result in unexpected or unwanted system behavior.
+:::
```

Comment on lines +15 to +30
## Prerequisites

There are some prerequisites before configuring the Harvester VM Migration Network setting.

- Well-configured Cluster Network and VLAN Config.
  - Users have to ensure the Cluster Network is configured and VLAN Config will cover all nodes and ensure the network connectivity is working and expected in all nodes.
- No VM Migration in progress before configuring the VM Migration Network setting.

:::caution

If the Harvester cluster was upgraded from v1.0.3, please check if Whereabouts CNI is installed properly before you move on to the next step. We will always recommend following this guide to check. [Issue 3168](https://github.com/harvester/harvester/issues/3168) describes that the Harvester cluster will not always install Whereabouts CNI properly.

- Verify the `ippools.whereabouts.cni.cncf.io` CRD exists with the following command.
  - `kubectl get crd ippools.whereabouts.cni.cncf.io`

:::
Contributor

Suggested change

```diff
-## Prerequisites
-There are some prerequisites before configuring the Harvester VM Migration Network setting.
-- Well-configured Cluster Network and VLAN Config.
-  - Users have to ensure the Cluster Network is configured and VLAN Config will cover all nodes and ensure the network connectivity is working and expected in all nodes.
-- No VM Migration in progress before configuring the VM Migration Network setting.
-:::caution
-If the Harvester cluster was upgraded from v1.0.3, please check if Whereabouts CNI is installed properly before you move on to the next step. We will always recommend following this guide to check. [Issue 3168](https://github.com/harvester/harvester/issues/3168) describes that the Harvester cluster will not always install Whereabouts CNI properly.
-- Verify the `ippools.whereabouts.cni.cncf.io` CRD exists with the following command.
-  - `kubectl get crd ippools.whereabouts.cni.cncf.io`
-:::
+## Prerequisites
+Before you begin configuring the VM migration network, ensure that the following requirements are met:
+- The network switches are correctly configured, and a dedicated VLAN ID is assigned to the VM migration network.
+- The [cluster network](../networking/clusternetwork.md) and [VLAN network](../networking/harvester-network.md) are configured correctly. Ensure that both networks cover all nodes and are accessible.
+- No virtual machines are being migrated.
+- The `ippools.whereabouts.cni.cncf.io` CRD exists. You can check this using the command `kubectl get crd ippools.whereabouts.cni.cncf.io`. In certain [upgrade scenarios](https://github.com/harvester/harvester/issues/3168), the Whereabouts CNI is not installed correctly.
```

Comment on lines +32 to +41
## Configuration Example

- VLAN ID
  - Please check with your network switch setting, and provide a dedicated VLAN ID for VM Migration Network.
- Well-configured Cluster Network and VLAN Config
  - Please refer Networking page for more details and configure [Cluster Network](../networking/clusternetwork.md) and [VLAN Config](../networking/harvester-network.md).
- IP range for VM Migration Network
  - IP range should not conflict or overlap with Kubernetes cluster networks(`10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16` and `10.53.0.0/16` are reserved).
  - IP range should be in IPv4 CIDR format.
  - Exclude IP addresses that KubeVirt pods and the VM migration network must not use.
Contributor

Suggested change

```diff
-## Configuration Example
-- VLAN ID
-  - Please check with your network switch setting, and provide a dedicated VLAN ID for VM Migration Network.
-- Well-configured Cluster Network and VLAN Config
-  - Please refer Networking page for more details and configure [Cluster Network](../networking/clusternetwork.md) and [VLAN Config](../networking/harvester-network.md).
-- IP range for VM Migration Network
-  - IP range should not conflict or overlap with Kubernetes cluster networks(`10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16` and `10.53.0.0/16` are reserved).
-  - IP range should be in IPv4 CIDR format.
-  - Exclude IP addresses that KubeVirt pods and the VM migration network must not use.
+- The IP range of the VM migration network is in the IPv4 CIDR format and must neither conflict nor overlap with Kubernetes cluster networks. You must exclude IP addresses that KubeVirt pods and the VM migration network must not use. The following addresses are reserved: `10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16` and `10.53.0.0/16`.
```

Contributor

The information must be part of the "Prerequisites" section. I will reorganize the "Storage Network" page after we merge this PR.

Comment on lines +43 to +48
We will take the following configuration as an example to explain the details of the VM Migration Network

- VLAN ID for VM Migration Network: `100`
- Cluster Network: `vm-migration`
- IP range: `192.168.1.0/24`
- Exclude Address: `192.168.1.1/32`
Contributor

Please remove lines 43 to 48. The information is already mentioned in the CLI section, where it is relevant.

Comment on lines +50 to +52
### Harvester VM Migration Network Setting

The [`vm-migration-network` setting](./settings.md#vm-migration-network) allows you to configure the network used to isolate in-cluster VM migration traffic when segregation is required.
Contributor

Suggested change

```diff
-### Harvester VM Migration Network Setting
-The [`vm-migration-network` setting](./settings.md#vm-migration-network) allows you to configure the network used to isolate in-cluster VM migration traffic when segregation is required.
+### `vm-migration-network` Setting
+The [`vm-migration-network`](./settings.md#vm-migration-network) setting allows you to configure the network used to isolate in-cluster VM migration traffic when segregation is required.
```
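For context on what the setting holds: by analogy with Harvester's `storage-network` setting (this PR's examples mirror that page), the value is plausibly a JSON document combining the VLAN ID, cluster network, IP range, and exclusions. A hypothetical value using the example figures from this PR; the exact schema should be confirmed against the merged docs:

```json
{
  "vlan": 100,
  "clusterNetwork": "vm-migration",
  "range": "192.168.1.0/24",
  "exclude": ["192.168.1.1/32"]
}
```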

Comment on lines +142 to +172
### Verify Configuration is Completed

#### Step 1

Check if Harvester VM Migration Network setting's status is `True` and the type is `configured`.

```bash
kubectl get settings.harvesterhci.io vm-migration-network -o yaml
```

Completed Setting Example:

```yaml
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  annotations:
    vm-migration-network.settings.harvesterhci.io/hash: ec8322fb6b741f94739cbb904fc73c3fda864d6d
    vm-migration-network.settings.harvesterhci.io/net-attach-def: harvester-system/vm-migration-network-6flk7
  creationTimestamp: "2022-10-13T06:36:39Z"
  generation: 51
  name: storage-network
  resourceVersion: "154638"
  uid: 2233ad63-ee52-45f6-a79c-147e48fc88db
status:
  conditions:
  - lastUpdateTime: "2022-10-13T13:05:17Z"
    reason: Completed
    status: "True"
    type: configured
```
Contributor

Suggested change

````diff
-### Verify Configuration is Completed
-#### Step 1
-Check if Harvester VM Migration Network setting's status is `True` and the type is `configured`.
-```bash
-kubectl get settings.harvesterhci.io vm-migration-network -o yaml
-```
-Completed Setting Example:
-```yaml
-apiVersion: harvesterhci.io/v1beta1
-kind: Setting
-metadata:
-  annotations:
-    vm-migration-network.settings.harvesterhci.io/hash: ec8322fb6b741f94739cbb904fc73c3fda864d6d
-    vm-migration-network.settings.harvesterhci.io/net-attach-def: harvester-system/vm-migration-network-6flk7
-  creationTimestamp: "2022-10-13T06:36:39Z"
-  generation: 51
-  name: storage-network
-  resourceVersion: "154638"
-  uid: 2233ad63-ee52-45f6-a79c-147e48fc88db
-status:
-  conditions:
-  - lastUpdateTime: "2022-10-13T13:05:17Z"
-    reason: Completed
-    status: "True"
-    type: configured
-```
+### Post-Configuration Steps
+1. Verify that the setting's status is `True` and the type is `configured` using the following command:
+   ```bash
+   kubectl get settings.harvesterhci.io vm-migration-network -o yaml
+   ```
+   Example:
+   ```yaml
+   apiVersion: harvesterhci.io/v1beta1
+   kind: Setting
+   metadata:
+     annotations:
+       vm-migration-network.settings.harvesterhci.io/hash: ec8322fb6b741f94739cbb904fc73c3fda864d6d
+       vm-migration-network.settings.harvesterhci.io/net-attach-def: harvester-system/vm-migration-network-6flk7
+     creationTimestamp: "2022-10-13T06:36:39Z"
+     generation: 51
+     name: storage-network
+     resourceVersion: "154638"
+     uid: 2233ad63-ee52-45f6-a79c-147e48fc88db
+   status:
+     conditions:
+     - lastUpdateTime: "2022-10-13T13:05:17Z"
+       reason: Completed
+       status: "True"
+       type: configured
+   ```
````
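The pass condition in this step can be stated precisely: among `status.conditions`, the entry with `type: configured` must report `status: "True"`. A small illustrative check, with the condition list transcribed from the example Setting above:

```python
# Condition list transcribed from the example Setting shown above.
conditions = [
    {
        "lastUpdateTime": "2022-10-13T13:05:17Z",
        "reason": "Completed",
        "status": "True",
        "type": "configured",
    }
]

def is_configured(conditions: list) -> bool:
    """True when the 'configured' condition reports status 'True'."""
    return any(c.get("type") == "configured" and c.get("status") == "True"
               for c in conditions)

print(is_configured(conditions))  # True
```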

Comment on lines +174 to +182
#### Step 2

Verify the readiness of all KubeVirt `virt-handler` pods, and confirm that their networks are correctly configured.

Execute the following command to inspect a pod's details:

```bash
kubectl -n harvester-system describe pod <pod-name>
```
Contributor

Suggested change

````diff
-#### Step 2
-Verify the readiness of all KubeVirt `virt-handler` pods, and confirm that their networks are correctly configured.
-Execute the following command to inspect a pod's details:
-```bash
-kubectl -n harvester-system describe pod <pod-name>
-```
+1. Verify that all KubeVirt `virt-handler` pods are ready and that their networks are correctly configured.
+   You can inspect pod details using the following command:
+   ```bash
+   kubectl -n harvester-system describe pod <pod-name>
+   ```
````

Comment on lines +184 to +224
#### Step 3

Check the `k8s.v1.cni.cncf.io/network-status` annotations and ensure that an interface named `migration0` exists, with an IP address within the designated IP range.

Users could use the following command to show all `virt-handler` pods to verify.

```bash
kubectl get pods -n harvester-system -l kubevirt.io=virt-handler -o yaml
```

Correct Network Example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 004522bc8468ea707038b43813cce2fba144f0e97551d2d358808d57caf7b543
    cni.projectcalico.org/podIP: 10.52.2.122/32
    cni.projectcalico.org/podIPs: 10.52.2.122/32
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "k8s-pod-network",
          "ips": [
              "10.52.2.122"
          ],
          "default": true,
          "dns": {}
      },{
          "name": "harvester-system/vm-migration-network-6flk7",
          "interface": "migration0",
          "ips": [
              "10.1.2.1"
          ],
          "mac": "c6:30:6f:02:52:3e",
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks: vm-migration-network-6flk7@migration0

Omitted...
```
Contributor

Suggested change

````diff
-#### Step 3
-Check the `k8s.v1.cni.cncf.io/network-status` annotations and ensure that an interface named `migration0` exists, with an IP address within the designated IP range.
-Users could use the following command to show all `virt-handler` pods to verify.
-```bash
-kubectl get pods -n harvester-system -l kubevirt.io=virt-handler -o yaml
-```
-Correct Network Example:
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  annotations:
-    cni.projectcalico.org/containerID: 004522bc8468ea707038b43813cce2fba144f0e97551d2d358808d57caf7b543
-    cni.projectcalico.org/podIP: 10.52.2.122/32
-    cni.projectcalico.org/podIPs: 10.52.2.122/32
-    k8s.v1.cni.cncf.io/network-status: |-
-      [{
-          "name": "k8s-pod-network",
-          "ips": [
-              "10.52.2.122"
-          ],
-          "default": true,
-          "dns": {}
-      },{
-          "name": "harvester-system/vm-migration-network-6flk7",
-          "interface": "migration0",
-          "ips": [
-              "10.1.2.1"
-          ],
-          "mac": "c6:30:6f:02:52:3e",
-          "dns": {}
-      }]
-    k8s.v1.cni.cncf.io/networks: vm-migration-network-6flk7@migration0
-Omitted...
-```
+1. Check the `k8s.v1.cni.cncf.io/network-status` annotations and verify that an interface named `migration0` exists. The IP address of this interface must be within the designated IP range.
+   You can retrieve a list of `virt-handler` pods using the following command:
+   ```bash
+   kubectl get pods -n harvester-system -l kubevirt.io=virt-handler -o yaml
+   ```
+   Example:
+   ```yaml
+   apiVersion: v1
+   kind: Pod
+   metadata:
+     annotations:
+       cni.projectcalico.org/containerID: 004522bc8468ea707038b43813cce2fba144f0e97551d2d358808d57caf7b543
+       cni.projectcalico.org/podIP: 10.52.2.122/32
+       cni.projectcalico.org/podIPs: 10.52.2.122/32
+       k8s.v1.cni.cncf.io/network-status: |-
+         [{
+             "name": "k8s-pod-network",
+             "ips": [
+                 "10.52.2.122"
+             ],
+             "default": true,
+             "dns": {}
+         },{
+             "name": "harvester-system/vm-migration-network-6flk7",
+             "interface": "migration0",
+             "ips": [
+                 "10.1.2.1"
+             ],
+             "mac": "c6:30:6f:02:52:3e",
+             "dns": {}
+         }]
+       k8s.v1.cni.cncf.io/networks: vm-migration-network-6flk7@migration0
+   Omitted...
+   ```
````
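The check in this step — a `migration0` interface whose address falls in the configured range — can also be scripted. A sketch using only the standard library, with the `network-status` annotation value copied from the example pod; the range `10.1.2.0/24` is assumed here purely so the example IP passes, substitute the range you actually configured:

```python
import ipaddress
import json

# Value of the k8s.v1.cni.cncf.io/network-status annotation from the example pod.
network_status = json.loads("""
[{
    "name": "k8s-pod-network",
    "ips": ["10.52.2.122"],
    "default": true,
    "dns": {}
},{
    "name": "harvester-system/vm-migration-network-6flk7",
    "interface": "migration0",
    "ips": ["10.1.2.1"],
    "mac": "c6:30:6f:02:52:3e",
    "dns": {}
}]
""")

def migration_ip(status: list) -> str:
    """Return the IP attached to the migration0 interface, or None if absent."""
    for entry in status:
        if entry.get("interface") == "migration0":
            return entry["ips"][0]
    return None

ip = migration_ip(network_status)
print(ip)  # 10.1.2.1
# Assumed range for illustration only.
print(ipaddress.ip_address(ip) in ipaddress.ip_network("10.1.2.0/24"))  # True
```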


## Best Practices

- When configuring an [IP range](#configuration-example) for the VM migration network, ensure that the allocated IP addresses can service the future needs of the cluster. This is important because KubeVirt pods (`virt-handler`) stop running when new nodes are added to the cluster after the VM migration network is configured, and when the required number of IPs exceeds the allocated IPs. Resolving the issue involves reconfiguring the storage network with the correct IP range.
Contributor

Suggested change

```diff
-- When configuring an [IP range](#configuration-example) for the VM migration network, ensure that the allocated IP addresses can service the future needs of the cluster. This is important because KubeVirt pods (`virt-handler`) stop running when new nodes are added to the cluster after the VM migration network is configured, and when the required number of IPs exceeds the allocated IPs. Resolving the issue involves reconfiguring the storage network with the correct IP range.
+- When configuring an [IP range](#prerequisites) for the VM migration network, ensure that the allocated IP addresses can service the future needs of the cluster. This is important because KubeVirt pods (`virt-handler`) stop running when new nodes are added to the cluster after the VM migration network is configured, and when the required number of IPs exceeds the allocated IPs. Resolving the issue involves reconfiguring the storage network with the correct IP range.
```

Comment on lines +861 to +869
**Definition**: Segregated network for VM migration traffic.

By default, VM migration uses the management network, which is limited to a single interface and shared with cluster-wide workloads. If your implementation requires network segregation, you can use a [vm migration network](./vm-migration-network.md) to isolate VM migration in-cluster data traffic.

:::info important

Specify an IP range in the IPv4 CIDR format. The number of IPs must be equal to or large than the number of your cluster nodes.

:::
Contributor

Suggested change

```diff
-**Definition**: Segregated network for VM migration traffic.
-By default, VM migration uses the management network, which is limited to a single interface and shared with cluster-wide workloads. If your implementation requires network segregation, you can use a [vm migration network](./vm-migration-network.md) to isolate VM migration in-cluster data traffic.
-:::info important
-Specify an IP range in the IPv4 CIDR format. The number of IPs must be equal to or large than the number of your cluster nodes.
-:::
+**Definition**: Segregated network for virtual machine migration traffic.
+By default, Harvester uses the built-in cluster network `mgmt` for virtual machine migration. `mgmt` is limited to a single interface and is shared with cluster-wide workloads. If your implementation requires network segregation, you can use a [VM migration network](./vm-migration-network.md) to isolate migration traffic.
+:::info important
+Specify an IP range in the IPv4 CIDR format. The number of IP addresses must be larger than or equal to the number of your cluster nodes.
+:::
```
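The sizing rule here (at least as many IP addresses as cluster nodes) can be sanity-checked up front. A sketch with Python's `ipaddress`, using the example range and exclusion that appear earlier in this PR:

```python
import ipaddress

def usable_ips(cidr: str, excludes: list) -> int:
    """Count host addresses in the range not covered by an excluded CIDR."""
    network = ipaddress.ip_network(cidr)
    excluded = [ipaddress.ip_network(e) for e in excludes]
    # hosts() already omits the network and broadcast addresses.
    return sum(1 for ip in network.hosts()
               if not any(ip in e for e in excluded))

# Example values from this PR: range 192.168.1.0/24, exclude 192.168.1.1/32.
available = usable_ips("192.168.1.0/24", ["192.168.1.1/32"])
print(available)        # 253 usable host addresses
print(available >= 3)   # True for a 3-node cluster
```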
