Data cache support for K8s 1.31 #2106
base: release-1.14
Conversation
NodeUnstageVolume.
…er errors when RAIDing fails
check for VG cleanup
a lot of no-op logs for customer.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: cemakd. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Hi @cemakd. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
/retest
```go
	return infoSlice[1], nil
}

func setupCaching(devicePath string, req *csi.NodeStageVolumeRequest, nodeId string) (string, error) {
```
@sunnylovestiramisu I know we wanted to expand unit test coverage too (I don't recall the exact scenarios). Do we add that in this PR as well?
We usually do not add tests to a backport; if we do, we should add them to master first.
Reduce new image tag to below 1.14, bump it from 1.13.2 to 1.13.3
Replace return with continue so we don't break out of the infinite loop
```diff
@@ -51,5 +51,5 @@ imageTag:
   name: gke.gcr.io/gcp-compute-persistent-disk-csi-driver
   # pdImagePlaceholder in test/k8s-integration/main.go is updated automatically with the newTag
   newName: registry.k8s.io/cloud-provider-gcp/gcp-compute-persistent-disk-csi-driver
-  newTag: "v1.17.2"
+  newTag: "v1.13.3"
```
Is this here just for testing, or do we want to set it back for the current stable-master?
This value was set to v1.13.2 on the release 1.14 branch per: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/release-1.14/deploy/kubernetes/images/stable-master/image.yaml
One of the PRs I cherry-picked most likely set this to a higher value, and that's likely why the diff shows 1.17.2.
On this release branch I believe it should be set to the release branch's stable-master version, right?
@cemakd Can you mention which PRs were backported to this branch? Changes from overlaid commits (on top of cherry-picked commits from previous PRs): https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/2106/files/f04f49634cd6134fadd9410dd4f286b4d6af947b..2929634ad1e7537419bc15055f6a0b3cb1a52f76
…able. Conditionally enable data cache on test node pools to fix e2e tests
```go
if nodeName != common.TestNode { // disregard logic below when E2E testing.
	dataCacheLSSDCount, err := driver.GetDataCacheCountFromNodeLabel(ctx, nodeName)
	return dataCacheLSSDCount != 0, err
} else if nodeName == common.TestNode {
```
@pwschuurman I needed to add this nodeName == common.TestNode branch here for the e2e tests to pass. Please let me know if this is not the right approach for testing purposes.
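For illustration, the guard in the diff can be sketched as one self-contained function. The constant and the label-lookup helper below are hypothetical stand-ins for the driver's real `common.TestNode` and `driver.GetDataCacheCountFromNodeLabel`:

```go
package main

import (
	"fmt"
	"strings"
)

// Stand-in for common.TestNode (assumption for this sketch).
const testNode = "csi-e2e-test-node"

// getDataCacheCountFromNodeLabel pretends to read the local-SSD count
// from a node label; here, only nodes named "*-cached" advertise LSSDs.
func getDataCacheCountFromNodeLabel(nodeName string) (int, error) {
	if strings.HasSuffix(nodeName, "-cached") {
		return 2, nil
	}
	return 0, nil
}

// isDataCacheEnabledNodePool mirrors the diff: real nodes consult the
// node label, while the e2e test node is treated as enabled
// unconditionally so tests can run without the label.
func isDataCacheEnabledNodePool(nodeName string) (bool, error) {
	if nodeName != testNode {
		count, err := getDataCacheCountFromNodeLabel(nodeName)
		return count != 0, err
	}
	return true, nil
}

func main() {
	for _, n := range []string{"gke-pool-a", "gke-pool-b-cached", testNode} {
		enabled, _ := isDataCacheEnabledNodePool(n)
		fmt.Println(n, enabled)
	}
}
```

The trade-off being discussed: special-casing the test node keeps e2e tests green, at the cost of a code path that behaves differently in test and production.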
/retest-required
splitting to fix issues with strings.Split in case of multiple consecutive spaces
/retest-required
@cemakd: The following tests failed.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Adds the required support in the PDCSI driver to set up caching for PVCs using local SSDs.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: