Add dual-stack support for node-cache #657
base: master
Conversation
Welcome @DockToFuture!
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: DockToFuture

The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
Hi @DockToFuture. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
Implementation-wise, it would be ideal to have a test for this new feature. Project-wise, let's reach an agreement on #642 before merging this. I've just reopened it so that the discussion can continue.
@DamianSawicki How is #642, which is about graceful shutdown/readiness, related to supporting dual-stack, i.e. IPv4 and IPv6? Please advise.
I'm terribly sorry, I somehow confused PR #669, which is related to Issue #642, with the present PR. Please disregard my comment.
No problem at all. Thanks for clarifying.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
#655 is closed; should this PR remain open?
If so, it would be good to guard this change with a feature flag and add some tests.
{utiliptables.Table("raw"), utiliptables.ChainOutput, []string{"-p", "tcp", "-s", localIP, | ||
"--sport", c.params.HealthPort, "-j", "NOTRACK", "-m", "comment", "--comment", iptablesCommentSkipConntrack}}, | ||
}...) | ||
if utilnet.IsIPv6(net.ParseIP(localIP)) { |
If the loops for v4 and v6 should do the same, I'd suggest something like

const (
	ipv4 int = iota
	ipv6
)

func isIPv6ToIndex(isIPv6 bool) int {
	if isIPv6 {
		return ipv6
	}
	return ipv4
}

...

i := isIPv6ToIndex(utilnet.IsIPv6(net.ParseIP(localIP)))
c.iptablesRules[i] = append(c.iptablesRules[i], ...)
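The benefit of indexing by family is that every append site stays symmetric: one code path serves both families, so a rule added for IPv4 cannot silently diverge from its IPv6 counterpart.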
}
-	c.iptables = newIPTables(c.isIPv6())
+	c.iptablesV4 = newIPTables(iptables.ProtocolIPv4)
Is this a system call? To save resources, it would probably be better to do something like
if len(c.iptablesRules[ipv4]) > 0 {
	c.iptables[ipv4] = newIPTables(iptables.ProtocolIPv4)
}
if len(c.iptablesRules[ipv6]) > 0 {
	c.iptables[ipv6] = newIPTables(iptables.ProtocolIPv6)
}
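If a handle can stay nil this way, the teardown and setup paths need a matching nil check before using c.iptables[i]; the teardown sketch further down includes one.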
@@ -180,32 +208,66 @@ func (c *CacheApp) TeardownNetworking() error {
	err = c.netifHandle.RemoveDummyDevice(c.params.InterfaceName)
}
if c.params.SetupIptables {
-	for _, rule := range c.iptablesRules {
+	for _, rule := range c.iptablesRulesV4 {
The treatment for ipv4 and ipv6 should be identical, right? If so, I'd suggest collapsing

iptablesV4 utiliptables.Interface
iptablesV6 utiliptables.Interface

to

iptables [2]utiliptables.Interface

and similarly

iptablesRules [2][]iptablesRule

This way, we can write something like

for i := range 2 {
	for _, rule := range c.iptablesRules[i] {
		// do something with c.iptables[i] instead of c.iptablesV4 or c.iptablesV6
	}
}

Actually, instead of for i := range 2, we could even write for _, i := range []int{ipv4, ipv6}.
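Concretely, TeardownNetworking could then walk both families in one loop. A sketch under the assumptions above (array-valued iptables/iptablesRules fields indexed by ipv4/ipv6; the nil check accounts for handles that were never created because their family has no rules):

for _, i := range []int{ipv4, ipv6} {
	if c.iptables[i] == nil {
		// no handle was created for this family, so nothing to tear down
		continue
	}
	for _, rule := range c.iptablesRules[i] {
		if delErr := c.iptables[i].DeleteRule(rule.table, rule.chain, rule.args...); delErr != nil {
			err = delErr
		}
	}
}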
	}
}
return err
}

func (c *CacheApp) setupNetworking() {
	if c.params.SetupIptables {
-		for _, rule := range c.iptablesRules {
-			exists, err := c.iptables.EnsureRule(utiliptables.Prepend, rule.table, rule.chain, rule.args...)
+		for _, rule := range c.iptablesRulesV4 {
Same remark about looping through {0,1} here.
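Under the same array-based layout, the setup loop would collapse analogously. A sketch (the handling of exists and err is elided to whatever the current per-family loops do):

for _, i := range []int{ipv4, ipv6} {
	for _, rule := range c.iptablesRules[i] {
		exists, err := c.iptables[i].EnsureRule(utiliptables.Prepend, rule.table, rule.chain, rule.args...)
		// handle exists and err as in the existing code
		...
	}
}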
@@ -100,12 +99,6 @@ func parseAndValidateFlags() (*app.ConfigParams, error) {
	params.LocalIPs = append(params.LocalIPs, newIP)
}

-	// validate all the IPs have the same IP family
Are we sure the condition validated here is not assumed anywhere else? This seems to require testing. The deleted comment above func (c *CacheApp) isIPv6() bool says

// LocalIPs are guaranteed to have the same family
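For reference, the removed check presumably looked roughly like the sketch below (a reconstruction, not the PR's exact code; validateSameFamily is a hypothetical name, and the k8s.io/utils/net import path for utilnet.IsIPv6 is an assumption):

import (
	"fmt"
	"net"

	utilnet "k8s.io/utils/net"
)

// validateSameFamily returns an error unless all given IPs share one IP family.
func validateSameFamily(ips []net.IP) error {
	if len(ips) == 0 {
		return nil
	}
	wantV6 := utilnet.IsIPv6(ips[0])
	for _, ip := range ips[1:] {
		if utilnet.IsIPv6(ip) != wantV6 {
			return fmt.Errorf("all local IPs must be of the same IP family: %v", ips)
		}
	}
	return nil
}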
@DamianSawicki is there still interest in dual-stack support for node-cache, given that there were no reactions for a long time? If so, I will adapt the PR.
Thanks for the quick response @DockToFuture! From the comments above and in gardener/gardener#10891 (review), it looked like @ScheererJ was interested. @Michcioperz @marqc, is dual-stack in node-cache something of interest to you?