Add NAT64 to enable IPv6 provision in v4 only host #1567
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
Force-pushed from 1abf0ce to 6122b0f (compare)
This PR is quite big, so I can also split it into smaller PRs. IPv6 is not ready in the dev env after this, but I want some feedback and discussion on how to do the DNS stuff. The bot already pinged @elfosardo, but you might also want to test this in your own environments, because it is quite big and introduces new moving parts.
/test metal3-centos-e2e-integration-test-release-1-10 metal3-dev-env-integration-test-ubuntu-main
Force-pushed from 6122b0f to cfafbd0 (compare)
/test metal3-centos-e2e-integration-test-release-1-10 metal3-dev-env-integration-test-ubuntu-main
/cc @terror96
@tuminoid: GitHub didn't allow me to request PR reviews from the following users: terror96. Note that only metal3-io members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

> /cc @terror96

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
terror96 left a comment:

Minor comments & suggestions
Resolved review threads:
- tests/roles/run_tests/templates/main/cluster-template-workers-kubeadm-config-centos.yaml
- tests/roles/run_tests/templates/main/cluster-template-controlplane-kubeadm-config-centos.yaml
- tests/roles/run_tests/templates/main/cluster-template-controlplane-kubeadm-config-centos.yaml
```diff
 spec:
   controlPlaneEndpoint:
-    host: ${ CLUSTER_APIENDPOINT_HOST }
+    host: ${ CLUSTER_APIENDPOINT_IP }
```
Is there an original setup where we would prefer using CLUSTER_APIENDPOINT_HOST?
This is rather annoying. Using `host` would usually be the way to go, but in this case, if we add brackets around the IPv6 address to make it look like a host, kubeadm errors out and the cluster cannot be reached.
Also, host names are not used anywhere, so using the IP should not be a problem at the moment, though moving to host names later would of course be annoying.
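For context, the bracket problem is the standard IPv6-literal-in-`host:port` issue: the colons inside a bare IPv6 address collide with the port separator, so URLs bracket the literal (per RFC 3986), while kubeadm's `controlPlaneEndpoint.host` field expects the bare form. A minimal sketch of the formatting rule (illustrative only, not code from this PR):

```python
import ipaddress

def format_endpoint(host: str, port: int) -> str:
    """Bracket IPv6 literals when building a host:port endpoint string."""
    try:
        if ipaddress.ip_address(host).version == 6:
            return f"[{host}]:{port}"
    except ValueError:
        pass  # not an IP literal, treat as a hostname
    return f"{host}:{port}"

print(format_endpoint("fd55::1", 6443))      # [fd55::1]:6443
print(format_endpoint("example.com", 6443))  # example.com:6443
```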
```yaml
shell:
  cmd: "ip addr del {{ LOCAL_DNS_V6 }}/128 dev lo"
become: yes
ignore_errors: true
```
We could also add IPv4 and IPv6 addresses to the tunnel device in order to enable sending of ICMP(v6) messages. It is unfortunate that they need to be added manually, because they also need to be specified in the tayga configuration file. Well, such is life.
Yeah, that would be good, but again, I feel that introducing any extra complexity at this point is not really needed.
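For reference, the reviewer's suggestion could look roughly like the task below. This is a hypothetical sketch, not part of the PR: the device name `nat64`, the `NAT64_V6_ADDR` variable, and the concrete IPv4 address are assumptions mirroring the tayga snippet discussed in this PR.

```yaml
# Hypothetical sketch (not in this PR): give the tayga tunnel device the
# translator's own IPv4/IPv6 addresses so it can originate ICMP(v6) errors.
- name: Add translator addresses to the NAT64 tunnel device
  ansible.builtin.command: "ip addr add {{ item }} dev nat64"
  loop:
    - "192.168.255.1/24"           # matches tayga's ipv4-addr
    - "{{ NAT64_V6_ADDR }}/128"    # hypothetical variable for tayga's ipv6-addr
  become: yes
  ignore_errors: true              # address may already exist
```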
```
ipv4-addr 192.168.255.1
prefix {{ DNS64_PREFIX }}
dynamic-pool 192.168.255.0/24
data-dir /var/spool/tayga
```
Maybe also IPv6 address for ICMPv6 traffic.
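To make the translation concrete: with a /96 prefix, tayga (following RFC 6052) treats the low 32 bits of a destination IPv6 address as the embedded IPv4 address, and maps IPv6 sources onto the `dynamic-pool`. A small illustrative sketch of the address embedding, using the well-known `64:ff9b::/96` prefix as a stand-in for `DNS64_PREFIX`:

```python
import ipaddress

def nat64_extract(prefix: str, addr: str) -> str:
    """Recover the IPv4 address embedded in the low 32 bits of a
    NAT64/DNS64-synthesized IPv6 address (RFC 6052, /96 prefix)."""
    net = ipaddress.ip_network(prefix)
    v6 = ipaddress.IPv6Address(addr)
    assert v6 in net, "address is outside the NAT64 prefix"
    return str(ipaddress.IPv4Address(int(v6) & 0xFFFF_FFFF))

print(nat64_extract("64:ff9b::/96", "64:ff9b::c000:201"))  # 192.0.2.1
```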
- Install Tayga as NAT64 service
- Install Bind9 and configure it as local DNS server with DNS64
- Add IPv6-specific addresses and logic to cluster, controlplane, and worker templates
- Update vars.md with new variables
- Update CNI configuration for IPv6

Signed-off-by: Nuutti Hakala <[email protected]>
Force-pushed from cfafbd0 to 222275a (compare)
This adds NAT64/DNS64 support to the dev env.

Why do we need this? Previously, IPv6 support was added so that the dev env could create bare metal hosts over IPv6. This PR extends that so that we can also provision the bare metal hosts over IPv6, and the provisioned images will be IPv6-only. The tricky part is that our CI environment does not support IPv6 natively, so the VMs cannot access the internet over IPv6 and hence cannot download Kubernetes images and the other images needed to set up a cluster.

Introducing NAT64/DNS64 solves that issue: it essentially allows the dev env to be deployed in an IPv6-only scenario on an IPv4-only host. Furthermore, this PR introduces the required changes to the templates so that they are configured to use IPv6.
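The DNS64 half works by synthesizing AAAA records: when a name resolves only to an A record, the DNS server embeds that IPv4 address into the configured /96 prefix, so IPv6-only clients get an address that routes through the NAT64 translator. A sketch of the synthesis (illustrative; the well-known `64:ff9b::/96` prefix stands in for `DNS64_PREFIX`):

```python
import ipaddress

def dns64_synthesize(prefix: str, ipv4: str) -> str:
    """Embed an IPv4 address into the low 32 bits of a /96 DNS64 prefix
    (RFC 6052), as a DNS64 resolver does for A-only names."""
    net = ipaddress.ip_network(prefix)
    v4 = ipaddress.IPv4Address(ipv4)
    return str(ipaddress.IPv6Address(int(net.network_address) | int(v4)))

print(dns64_synthesize("64:ff9b::/96", "192.0.2.1"))  # 64:ff9b::c000:201
```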
NOTE! Running `make test` does not pass yet. However, these changes should allow IPv6-only BMHs to be provisioned with an operating system and a K8s cluster to be created on those BMHs. This only works with CentOS node images.

Other not directly related changes that could actually be in their own PR:

- Renamed `vm-setup/roles/packages_installation/files/daemon.json` to `vm-setup/roles/packages_installation/templates/daemon.json` to better reflect the purpose of the file.

Other considerations:

- CRI-O cannot pull images from a registry given as a bare IPv6 address; it fails with `cannot parse input: [fd55::1]:5000/localimages/cluster-api-provider-metal3:main`. I manually tested this, and CRI-O was able to pull images after creating a hostname in `/etc/hosts` and specifying that hostname instead of the bare IPv6 address.
- Changed `CLUSTER_APIENDPOINT_HOST` into `CLUSTER_APIENDPOINT_IP` in some templates, because having brackets around the IPv6 address caused errors.
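The CRI-O parse failure is the same colon ambiguity again: in an image reference, the registry port separator cannot be distinguished from the colons of an unbracketed IPv6 address. A toy parser (illustrative only, not CRI-O's actual logic) showing why a hostname or a bracketed literal splits cleanly:

```python
def split_registry(ref: str):
    """Split an image reference into (host, port, path).

    Sketch of why bare IPv6 literals break: the port separator ':' is
    ambiguous inside an unbracketed IPv6 address, so references must use
    a hostname or a bracketed literal.
    """
    hostport, _, path = ref.partition("/")
    if hostport.startswith("["):                   # bracketed IPv6 literal
        host, _, port = hostport[1:].partition("]:")
    else:                                          # hostname or IPv4
        host, _, port = hostport.rpartition(":")
    return host, port, path

print(split_registry("registry.local:5000/localimages/app:main"))
print(split_registry("[fd55::1]:5000/localimages/app:main"))
```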