Support scraping metrics from target running with TLS #1190
Conversation
Hi @pierDipi. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
Force-pushed from 84a7c4d to 5d5825c
vLLM server can run with TLS and metrics scraping doesn't work currently in that case. Signed-off-by: Pierangelo Di Pilato <[email protected]>
```diff
@@ -136,7 +138,9 @@ var (
 	modelServerMetricsPort = flag.Int("model-server-metrics-port", 0, "Port to scrape metrics from pods. "+
 		"Default value will be set to InferencePool.Spec.TargetPortNumber if not set.")
 	modelServerMetricsPath = flag.String("model-server-metrics-path", "/metrics", "Path to scrape metrics from pods")
```
Looks great! Do you mind adding these flags (with the same defaults) to the helm chart? Example from `config/charts/inferencepool/templates/epp-deployment.yaml` (line 41 at 6cf8f31):

```yaml
- "--enable-pprof={{ .Values.inferenceExtension.enablePprof }}"
```
Thanks!
Thanks for the feedback Kellen, done!
Looks great! Just adding a request to update helm also. Thanks!
Signed-off-by: Pierangelo Di Pilato <[email protected]>
Code-wise, LGTM.
It's not clear how the CA for the metrics endpoint certificate is added to the HTTPS client configuration in GIE.
```go
metricsHttpClient = &http.Client{
	Transport: &http.Transport{
		TLSClientConfig: &tls.Config{
			InsecureSkipVerify: *modelServerMetricsHttpsInsecureSkipVerify,
		},
	},
}
```
How is the CA added to the TLS context? Is it intended to be added to the default certificate roots in the image?
Without adding the CA to the allowed list, won't you be forced to always set insecure-skip-verify to true?
I agree, we should actually remove the flag for now and set this to true
Co-authored-by: Abdullah Gharaibeh <[email protected]>
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: kfswain, pierDipi. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
The vLLM server can run with TLS, and metrics scraping currently doesn't work in that case.

Support setting `--model-server-metrics-scheme=http|https` and `--model-server-metrics-https-insecure-skip-verify=false|true`.

The change is backward compatible.

Fixes #1189