feat: add vm-migration-network setting #811

Status: Open. Wants to merge 1 commit into `main`.
52 changes: 38 additions & 14 deletions docs/advanced/settings.md
@@ -78,7 +78,7 @@ For more information, see the **Certificate Rotation** section of the [Rancher](

### `backup-target`

**Definition**: Custom backup target used to store VM backups.

For more information, see the [Longhorn documentation](https://longhorn.io/docs/1.6.0/snapshots-and-backups/backup-and-restore/set-backup-target/#set-up-aws-s3-backupstore).

@@ -122,7 +122,7 @@ https://172.16.0.1/v3/import/w6tp7dgwjj549l88pr7xmxb4x6m54v5kcplvhbp9vv2wzqrrjhr

### `containerd-registry`

**Definition**: Configuration of a private registry created for the Harvester cluster.

The value is stored in the `registries.yaml` file of each node (path: `/etc/rancher/rke2/registries.yaml`). For more information, see [Containerd Registry Configuration](https://docs.rke2.io/install/private_registry) in the RKE2 documentation.
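As a hedged illustration of what such a configuration produces on a node, assuming the standard RKE2 `registries.yaml` format described in the linked documentation (the registry host and credentials below are placeholders):

```yaml
# Sketch of /etc/rancher/rke2/registries.yaml; host and credentials are placeholders.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    auth:
      username: admin      # placeholder
      password: secret     # placeholder
    tls:
      insecure_skip_verify: false
```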

@@ -205,7 +205,7 @@ Changing this setting might cause single-node clusters to temporarily become una
- Proxy URL for HTTPS requests: `"httpsProxy": "https://<username>:<pswd>@<ip>:<port>"`
- Comma-separated list of hostnames and/or CIDRs: `"noProxy": "<hostname | CIDR>"`

You must specify key information in the `noProxy` field if you configured the following options or settings:

| Configured option/setting | Required value in `noProxy` | Reason |
| --- | --- | --- |
@@ -252,7 +252,7 @@ debug

**Definition**: Setting that enables and disables the Longhorn V2 Data Engine.

When set to `true`, Harvester automatically loads the kernel modules required by the Longhorn V2 Data Engine, and attempts to allocate 1024 × 2 MiB-sized huge pages (for example, 2 GiB of RAM) on all nodes.

Changing this setting automatically restarts RKE2 on all nodes but does not affect running virtual machine workloads.

@@ -261,7 +261,7 @@
If you encounter error messages that include the phrase "not enough hugepages-2Mi capacity", allow some time for the error to be resolved. If the error persists, reboot the affected nodes.

To disable the Longhorn V2 Data Engine on specific nodes (for example, nodes with less processing and memory resources), go to the **Hosts** screen and add the following label to the target nodes:

- label: `node.longhorn.io/disable-v2-data-engine`
- value: `true`
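If you prefer the CLI, a minimal sketch of the equivalent operation (assuming `kubectl` access to the cluster; `<node-name>` is a placeholder) is:

```bash
# Label the node so that the Longhorn V2 Data Engine is not enabled on it.
kubectl label node <node-name> node.longhorn.io/disable-v2-data-engine=true
```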

@@ -306,7 +306,7 @@ Changes to the server address list are applied to all nodes.

**Definition**: Percentage of physical compute, memory, and storage resources that can be allocated for VM use.

Overcommitting is used to optimize physical resource allocation, particularly when VMs are not expected to fully consume the allocated resources most of the time. Setting values greater than 100% allows scheduling of multiple VMs even when physical resources are notionally fully allocated.

**Default values**: `{ "cpu":1600, "memory":150, "storage":200 }`
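As an illustration of the arithmetic only: with the default `cpu` value of `1600` (that is, 1600%), a node with 8 physical cores can nominally schedule up to 128 vCPUs.

```bash
# Illustrative only: physical cores x overcommit percentage / 100
echo $(( 8 * 1600 / 100 ))   # 128 schedulable vCPUs
```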

@@ -515,7 +515,7 @@ If you misconfigure this setting and are unable to access the Harvester UI and A

**Supported options and values**:

- `protocols`: Enabled protocols.
- `ciphers`: Enabled ciphers.

For more information about the supported options, see [`ssl-protocols`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-protocols) and [`ssl-ciphers`](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-ciphers) in the Ingress-Nginx Controller documentation.
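A hedged example value (the protocol and cipher strings below are illustrative choices, not documented defaults):

```json
{
  "protocols": "TLSv1.2 TLSv1.3",
  "ciphers": "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256"
}
```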
@@ -686,7 +686,7 @@ When the cluster is upgraded in the future, the contents of the `value` field ma

**Versions**: v1.2.0 and later

**Definition**: Additional namespaces that you can use when [generating a support bundle](../troubleshooting/harvester.md#generate-a-support-bundle).

By default, the support bundle only collects resources from the following predefined namespaces:

@@ -729,7 +729,7 @@ You can specify a value greater than or equal to 0. When the value is 0, Harvest

**Versions**: v1.3.1 and later

**Definition**: Number of minutes Harvester allows for collection of logs and configurations (Harvester) on the nodes for the support bundle.

If the collection process is not completed within the allotted time, Harvester still allows you to download the support bundle (without the uncollected data). You can specify a value greater than or equal to 0. When the value is 0, Harvester uses the default value.

@@ -770,7 +770,7 @@ https://your.upgrade.checker-url/v99/checkupgrade
**Supported options and fields**:

- `imagePreloadOption`: Options for the image preloading phase.

The full ISO contains the core operating system components and all required container images. Harvester can preload these container images to each node during installation and upgrades. When workloads are scheduled to management and worker nodes, the container images are ready to use.

- `strategy`: Image preload strategy.
@@ -786,10 +786,10 @@ https://your.upgrade.checker-url/v99/checkupgrade
If you decide to use `skip`, ensure that the following requirements are met:

- You have a private container registry that contains all required images.
- Your cluster has high-speed internet access and is able to pull all images from Docker Hub when necessary.

Note any potential internet service interruptions and how close you are to reaching your [Docker Hub rate limit](https://www.docker.com/increase-rate-limits/). Failure to download any of the required images may cause the upgrade to fail and leave the cluster in an intermediate state.

:::

- `parallel` (**experimental**): Nodes preload images in batches. You can adjust this using the `concurrency` option.
@@ -839,7 +839,7 @@ https://your.upgrade.checker-url/v99/checkupgrade

### `vm-force-reset-policy`

**Definition**: Setting that allows you to force rescheduling of a VM when the node that it is running on becomes unavailable.

When the state of the node changes to `Not Ready`, the VM is force deleted and rescheduled to an available node after the configured number of seconds.

@@ -856,6 +856,30 @@ When the node becomes unavailable or is powered off, the VM only restarts and do
}
```
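If you manage this setting from the CLI, a minimal sketch (following the same `kubectl edit` pattern used for other Harvester settings in this documentation) is:

```bash
# Edit the vm-force-reset-policy setting; the value is the JSON string shown in the example above.
kubectl edit settings.harvesterhci.io vm-force-reset-policy
```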

### `vm-migration-network`

**Definition**: Segregated network for VM migration traffic.

By default, VM migration uses the management network, which is limited to a single interface and shared with cluster-wide workloads. If your implementation requires network segregation, you can use a [VM migration network](./vm-migration-network.md) to isolate in-cluster VM migration traffic.

:::info important

Specify an IP range in the IPv4 CIDR format. The number of IP addresses must be larger than or equal to the number of your cluster nodes.

:::
Comment on lines +861 to +869

Suggested change:

**Definition**: Segregated network for virtual machine migration traffic.

By default, Harvester uses the built-in cluster network `mgmt` for virtual machine migration. `mgmt` is limited to a single interface and is shared with cluster-wide workloads. If your implementation requires network segregation, you can use a [VM migration network](./vm-migration-network.md) to isolate migration traffic.

:::info important

Specify an IP range in the IPv4 CIDR format. The number of IP addresses must be larger than or equal to the number of your cluster nodes.

:::


**Default value**: ""

**Example**:

```json
{
  "vlan": 100,
  "clusterNetwork": "vm-migration",
  "range": "192.168.1.0/24"
}
```

### `volume-snapshot-class`

**Definition**: VolumeSnapshotClassName for the VolumeSnapshot and VolumeSnapshotContent when restoring a VM to a namespace that does not contain the source VM.
230 changes: 230 additions & 0 deletions docs/advanced/vm-migration-network.md
@@ -0,0 +1,230 @@
---
sidebar_position: 12
sidebar_label: VM Migration Network
title: "VM Migration Network"
---

If you want to isolate VM migration traffic from the Kubernetes cluster network (that is, the management network) or other cluster-wide workloads, you can allocate a dedicated VM migration network to obtain better network bandwidth and performance.

:::note

Avoid modifying the KubeVirt configuration directly, as this can result in unexpected or unwanted system behavior.

:::
Comment on lines +7 to +13

Suggested change:

A VM migration network is useful for isolating migration traffic from cluster traffic on `mgmt` and other cluster-wide workloads. Using a VM migration network results in better network bandwidth and performance.

:::note

Avoid configuring KubeVirt settings directly, as this can result in unexpected or unwanted system behavior.

:::


## Prerequisites

Before configuring the Harvester VM migration network setting, ensure that the following prerequisites are met:

- The cluster network and VLAN config are correctly configured. The VLAN config must cover all nodes, and network connectivity must work as expected on all nodes.
- No VM migration is in progress.

:::caution

If the Harvester cluster was upgraded from v1.0.3, check whether the Whereabouts CNI is installed correctly before you move on to the next step. [Issue 3168](https://github.com/harvester/harvester/issues/3168) describes cases in which the Whereabouts CNI is not installed properly.

Verify that the `ippools.whereabouts.cni.cncf.io` CRD exists with the following command:

- `kubectl get crd ippools.whereabouts.cni.cncf.io`
:::
Comment on lines +15 to +30

Suggested change:

## Prerequisites

Before you begin configuring the VM migration network, ensure that the following requirements are met:

- The network switches are correctly configured, and a dedicated VLAN ID is assigned to the VM migration network.
- The [cluster network](../networking/clusternetwork.md) and [VLAN network](../networking/harvester-network.md) are configured correctly. Ensure that both networks cover all nodes and are accessible.
- No virtual machines are being migrated.
- The `ippools.whereabouts.cni.cncf.io` CRD exists. You can check this using the command `kubectl get crd ippools.whereabouts.cni.cncf.io`. In certain [upgrade scenarios](https://github.com/harvester/harvester/issues/3168), the Whereabouts CNI is not installed correctly.


## Configuration Example

- VLAN ID
  - Check your network switch settings and provide a dedicated VLAN ID for the VM migration network.
- Well-configured cluster network and VLAN config
  - Refer to the networking pages for details and configure the [Cluster Network](../networking/clusternetwork.md) and [VLAN Config](../networking/harvester-network.md).
- IP range for the VM migration network
  - The IP range must not conflict or overlap with Kubernetes cluster networks (`10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16`, and `10.53.0.0/16` are reserved).
  - The IP range must be in IPv4 CIDR format.
  - Exclude IP addresses that KubeVirt pods and the VM migration network must not use.
Comment: Wrong indent? What IP addresses should KubeVirt not use?

Comment on lines +32 to +41

Suggested change:

- The IP range of the VM migration network is in the IPv4 CIDR format and must neither conflict nor overlap with Kubernetes cluster networks. You must exclude IP addresses that KubeVirt pods and the VM migration network must not use. The following addresses are reserved: `10.42.0.0/16`, `10.43.0.0/16`, `10.52.0.0/16`, and `10.53.0.0/16`.

The information must be part of the "Prerequisites" section. I will reorganize the "Storage Network" page after we merge this PR.


The following configuration is used as an example to explain the details of the VM migration network:

- VLAN ID for VM Migration Network: `100`
- Cluster Network: `vm-migration`
- IP range: `192.168.1.0/24`
- Exclude Address: `192.168.1.1/32`
Comment on lines +43 to +48

Please remove lines 43 to 48. The information is already mentioned in the CLI section, where it is relevant.


### Harvester VM Migration Network Setting

The [`vm-migration-network` setting](./settings.md#vm-migration-network) allows you to configure the network used to isolate in-cluster VM migration traffic when segregation is required.
Comment on lines +50 to +52

Suggested change:

### `vm-migration-network` Setting

The [`vm-migration-network`](./settings.md#vm-migration-network) setting allows you to configure the network used to isolate in-cluster VM migration traffic when segregation is required.


You can [enable](#enable-the-vm-migration-network) and [disable](#disable-the-vm-migration-network) the VM migration network using either the UI or the CLI. When the setting is enabled, you must construct a Multus `NetworkAttachmentDefinition` CRD by configuring certain fields.

#### Web UI

:::tip

Using the Harvester UI to configure the `vm-migration-network` setting is strongly recommended.

:::

##### Enable the VM Migration Network

1. Go to **Advanced > Settings > vm-migration-network**.

1. Select **Enabled**.

1. Configure the **VLAN ID**, **Cluster Network**, **IP Range**, and **Exclude** fields to construct a Multus `NetworkAttachmentDefinition` CRD.

1. Click **Save**.

![storage-network-enabled.png](/img/v1.4/storagenetwork/storage-network-enabled.png)

##### Disable the VM Migration Network

1. Go to **Advanced > Settings > vm-migration-network**.

1. Select **Disabled**.

1. Click **Save**.

Once the VM migration network is disabled, KubeVirt starts using the `mgmt` network for VM migration-related operations.

![storage-network-disabled.png](/img/v1.4/storagenetwork/storage-network-disabled.png)
Comment on lines +56 to +86

Suggested change:

<Tabs>
<TabItem value="ui" label="UI" default>

:::tip

Using the Harvester UI to configure the `vm-migration-network` setting is strongly recommended.

:::

#### Enable the VM Migration Network

1. Go to **Advanced > Settings > vm-migration-network**.

1. Select **Enabled**.

1. Configure the **VLAN ID**, **Cluster Network**, **IP Range**, and **Exclude** fields to construct a Multus `NetworkAttachmentDefinition` CRD.

1. Click **Save**.

![storage-network-enabled.png](/img/v1.4/storagenetwork/storage-network-enabled.png)

#### Disable the VM Migration Network

1. Go to **Advanced > Settings > vm-migration-network**.

1. Select **Disabled**.

1. Click **Save**.

Once the VM migration network is disabled, KubeVirt starts using `mgmt` for VM migration-related operations.

![storage-network-disabled.png](/img/v1.4/storagenetwork/storage-network-disabled.png)

</TabItem>

Let's use tabs to reduce the number of headings and improve the reading experience.


#### CLI

You can use the following command to configure the [`vm-migration-network` setting](./settings.md#vm-migration-network).

```bash
kubectl edit settings.harvesterhci.io vm-migration-network
```

The value is a JSON string or an empty string, as shown below:

```json
{
  "vlan": 100,
  "clusterNetwork": "vm-migration",
  "range": "192.168.1.0/24",
  "exclude": [
    "192.168.1.100/32"
  ]
}
```

The full configuration looks like the following example:

```yaml
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: vm-migration-network
value: '{"vlan":100,"clusterNetwork":"vm-migration","range":"192.168.1.0/24", "exclude":["192.168.1.100/32"]}'
```
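A non-interactive alternative, sketched under the assumption that patching the `value` field of the `Setting` resource is equivalent to editing it:

```bash
# Assumed equivalent to the `kubectl edit` command shown above.
kubectl patch settings.harvesterhci.io vm-migration-network --type merge \
  -p '{"value":"{\"vlan\":100,\"clusterNetwork\":\"vm-migration\",\"range\":\"192.168.1.0/24\",\"exclude\":[\"192.168.1.100/32\"]}"}'
```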
Comment on lines +88 to +117

Suggested change:

<TabItem value="cli" label="CLI">

You can use the following command to configure the [`vm-migration-network`](./settings.md#vm-migration-network) setting.

```bash
kubectl edit settings.harvesterhci.io vm-migration-network
```

The VM migration network is automatically enabled in the following situations:

- The value field contains a valid JSON string.

  ```yaml
  apiVersion: harvesterhci.io/v1beta1
  kind: Setting
  metadata:
    name: vm-migration-network
  value: '{"vlan":100,"clusterNetwork":"vm-migration","range":"192.168.1.0/24", "exclude":["192.168.1.100/32"]}'
  ```

- The value field is empty.

  ```yaml
  apiVersion: harvesterhci.io/v1beta1
  kind: Setting
  metadata:
    name: vm-migration-network
  value: ''
  ```


When the VM migration network is disabled, the full configuration is as follows:

```yaml
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: vm-migration-network
```

:::caution

Harvester treats JSON strings that differ only in insignificant characters (such as whitespace) as different configurations.

Specifying a valid value in the `value` field enables the VM migration network. Deleting the `value` field disables the VM migration network.

:::
Comment on lines +119 to +134

Suggested change:

The VM migration network is disabled when you remove the value field.

```yaml
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: vm-migration-network
```

:::caution

Harvester considers extra insignificant characters in a JSON string as a different configuration.

:::

</TabItem>
</Tabs>


### After Applying Harvester VM Migration Network Setting

After the setting is applied, Harvester creates a new `NetworkAttachmentDefinition` and updates the KubeVirt configuration.

Once the KubeVirt configuration is updated, KubeVirt restarts all `virt-handler` pods to apply the new network configuration.
Comment on lines +136 to +140

Suggested change:

The following occur once the `vm-migration-network` setting is applied:

- Harvester creates a new `NetworkAttachmentDefinition` and updates the KubeVirt configuration.
- KubeVirt restarts all `virt-handler` pods to apply the new network configuration.

The information is part of the previous section. Also, the heading is not necessary because I reorganized this section.


### Verify That the Configuration Is Complete

#### Step 1

Check that the `vm-migration-network` setting's status is `True` and its type is `configured`.

```bash
kubectl get settings.harvesterhci.io vm-migration-network -o yaml
```

Example of a completed setting:

```yaml
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  annotations:
    vm-migration-network.settings.harvesterhci.io/hash: ec8322fb6b741f94739cbb904fc73c3fda864d6d
    vm-migration-network.settings.harvesterhci.io/net-attach-def: harvester-system/vm-migration-network-6flk7
  creationTimestamp: "2022-10-13T06:36:39Z"
  generation: 51
  name: vm-migration-network
  resourceVersion: "154638"
  uid: 2233ad63-ee52-45f6-a79c-147e48fc88db
status:
  conditions:
  - lastUpdateTime: "2022-10-13T13:05:17Z"
    reason: Completed
    status: "True"
    type: configured
```
Comment on lines +142 to +172

Suggested change:

### Post-Configuration Steps

1. Verify that the setting's status is `True` and the type is `configured` using the following command:

   ```bash
   kubectl get settings.harvesterhci.io vm-migration-network -o yaml
   ```

   Example:

   ```yaml
   apiVersion: harvesterhci.io/v1beta1
   kind: Setting
   metadata:
     annotations:
       vm-migration-network.settings.harvesterhci.io/hash: ec8322fb6b741f94739cbb904fc73c3fda864d6d
       vm-migration-network.settings.harvesterhci.io/net-attach-def: harvester-system/vm-migration-network-6flk7
     creationTimestamp: "2022-10-13T06:36:39Z"
     generation: 51
     name: vm-migration-network
     resourceVersion: "154638"
     uid: 2233ad63-ee52-45f6-a79c-147e48fc88db
   status:
     conditions:
     - lastUpdateTime: "2022-10-13T13:05:17Z"
       reason: Completed
       status: "True"
       type: configured
   ```


#### Step 2

Verify the readiness of all KubeVirt `virt-handler` pods, and confirm that their networks are correctly configured.

Execute the following command to inspect a pod's details:

```bash
kubectl -n harvester-system describe pod <pod-name>
```
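A quicker first pass, assuming the `kubevirt.io=virt-handler` label used later in this page, is to list the pods and confirm that they are all `Running` and ready:

```bash
# All virt-handler pods should report READY 1/1 and STATUS Running.
kubectl -n harvester-system get pods -l kubevirt.io=virt-handler
```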
Comment on lines +174 to +182

Suggested change:

1. Verify that all KubeVirt `virt-handler` pods are ready and that their networks are correctly configured.

   You can inspect pod details using the following command:

   ```bash
   kubectl -n harvester-system describe pod <pod-name>
   ```


#### Step 3

Check the `k8s.v1.cni.cncf.io/network-status` annotations and ensure that an interface named `migration0` exists, with an IP address within the designated IP range.

You can use the following command to list all `virt-handler` pods for verification:

```bash
kubectl get pods -n harvester-system -l kubevirt.io=virt-handler -o yaml
```

Example of a correctly configured network:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 004522bc8468ea707038b43813cce2fba144f0e97551d2d358808d57caf7b543
    cni.projectcalico.org/podIP: 10.52.2.122/32
    cni.projectcalico.org/podIPs: 10.52.2.122/32
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "k8s-pod-network",
          "ips": [
              "10.52.2.122"
          ],
          "default": true,
          "dns": {}
      },{
          "name": "harvester-system/vm-migration-network-6flk7",
          "interface": "migration0",
          "ips": [
              "10.1.2.1"
          ],
          "mac": "c6:30:6f:02:52:3e",
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks: vm-migration-network-6flk7@migration0

Omitted...
```
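Instead of reading the full YAML output, you can narrow it down to the relevant annotation, for example (a rough filter, not an exhaustive check):

```bash
# Show only the network-status fragments that mention the migration0 interface.
kubectl get pods -n harvester-system -l kubevirt.io=virt-handler -o yaml \
  | grep -A 4 '"interface": "migration0"'
```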
Comment on lines +184 to +224

Suggested change:

1. Check the `k8s.v1.cni.cncf.io/network-status` annotations and verify that an interface named `migration0` exists. The IP address of this interface must be within the designated IP range.

   You can retrieve a list of `virt-handler` pods using the following command:

   ```bash
   kubectl get pods -n harvester-system -l kubevirt.io=virt-handler -o yaml
   ```

   Example:

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     annotations:
       cni.projectcalico.org/containerID: 004522bc8468ea707038b43813cce2fba144f0e97551d2d358808d57caf7b543
       cni.projectcalico.org/podIP: 10.52.2.122/32
       cni.projectcalico.org/podIPs: 10.52.2.122/32
       k8s.v1.cni.cncf.io/network-status: |-
         [{
             "name": "k8s-pod-network",
             "ips": [
                 "10.52.2.122"
             ],
             "default": true,
             "dns": {}
         },{
             "name": "harvester-system/vm-migration-network-6flk7",
             "interface": "migration0",
             "ips": [
                 "10.1.2.1"
             ],
             "mac": "c6:30:6f:02:52:3e",
             "dns": {}
         }]
       k8s.v1.cni.cncf.io/networks: vm-migration-network-6flk7@migration0

   Omitted...
   ```


## Best Practices

- When configuring an [IP range](#configuration-example) for the VM migration network, ensure that the allocated IP addresses can service the future needs of the cluster. This is important because KubeVirt pods (`virt-handler`) stop running when new nodes are added to the cluster after the VM migration network is configured, and when the required number of IPs exceeds the allocated IPs. Resolving the issue involves reconfiguring the VM migration network with the correct IP range.
Suggested change:

- When configuring an [IP range](#prerequisites) for the VM migration network, ensure that the allocated IP addresses can service the future needs of the cluster. This is important because KubeVirt pods (`virt-handler`) stop running when new nodes are added to the cluster after the VM migration network is configured, and when the required number of IPs exceeds the allocated IPs. Resolving the issue involves reconfiguring the VM migration network with the correct IP range.


- Configure the VM migration network on a non-`mgmt` cluster network to ensure complete separation of the VM migration traffic from the Kubernetes control plane traffic. Using `mgmt` is possible but not recommended because of the negative impact (resource and bandwidth contention) on the control plane network performance. Use `mgmt` only if your cluster has NIC-related constraints and if you can completely segregate the traffic.
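As a sizing sketch (assuming one address per node is needed for the `virt-handler` pods, plus any addresses you exclude), you can estimate how many addresses a given CIDR prefix provides:

```bash
# Illustrative only: a /27 range provides 2^(32-27) = 32 addresses.
echo $(( 1 << (32 - 27) ))   # 32
```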