Include kubectl describe output #137
Conversation
Query pg_settings to capture current Postgres settings directly from the DB.
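As a rough sketch of that capture, assuming shell access to the database (the connection flags and output filename below are illustrative, not the tool's actual invocation):

```shell
# Illustrative only: dump every setting and where it came from.
# Host/user/db values are assumptions, not the support export's real flags.
psql -h localhost -U postgres -d postgres -At -F $'\t' \
  -c "SELECT name, setting, source FROM pg_settings ORDER BY name;" \
  > pg_settings.tsv
```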
cluster-info returns a lot of sensitive information and is out of scope for the support export.
Remove the `kubectl describe crds` call and replace it with fetching the CRDs via the appropriate APIs. This brings CRDs into the support export output at the same level as pods, sts, etc.
Treats PGAdmin resources separately since they are independent of a PostgresCluster. Gathers all PGAdmin resources in a namespace; this is usually one, but there could be several.
tjmoore4
left a comment
Looks good to me. One leftover comment that could potentially be removed.
	cmd *cobra.Command,
) error {
	writeInfo(cmd, "Collecting events...")
	// list, err := clientset.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{})
🤔 Can this comment be removed?
Changed from 'events' to 'CRDs'.
	writeInfo(cmd, "Running kubectl describe clusterrole...")
	err = runKubectlCommand(tw, cmd, clusterName+"/describe/clusterrole", "describe", "clusterrole", "postgres-operator")
	if err != nil {
		writeInfo(cmd, "Could not find clusterrole 'postgres-operator'. Looking for 'postgresoperator'...")
		writeInfo(cmd, "Running kubectl describe clusterrole...")
		err = runKubectlCommand(tw, cmd, clusterName+"/describe/clusterrole", "describe", "clusterrole", "postgresoperator")
	}
🔧 I think we need some comment here explaining why postgres-operator and postgresoperator are being checked.
I kinda wish we could get the clusterrole by a label/annotation, but that's gonna be dependent on the installation method, so that's also not foolproof. OK, so
a) no sure way to identify our clusterrole
b) no easy way to ask the user to put in the name
c) so we do a best effort by searching for both of the common names without assuming which it will be / what sort of env we're in -- seems reasonable
yep, this is just a best effort to cover the majority of cases. I agree a comment clarifying why we have this extra step would help. I added this comment to both the clusterrole and clusterrolebinding sections:
// Resource name is generally 'postgres-operator' but in some environments
// like OpenShift it could be 'postgresoperator'
	if err != nil {
		writeInfo(cmd, fmt.Sprintf("Error running kubectl describe clusterrole: %s", err))
	}
🤔 Should this be tucked into the logic block btw 611-615? Or wait -- should we print out the err we got on 610 before overwriting it with the err on 614?
yeah, I think there's a way to improve the flow and really show that the check for 'postgresoperator' is a special case, without making the overall structure too hard to follow at a glance.
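A minimal sketch of that flow, trying each common name in order and keeping the first success (the helper names, signatures, and the map-backed stub are illustrative, not the actual support-export code):

```go
package main

import (
	"errors"
	"fmt"
)

// describeClusterRole is a stand-in for the real kubectl call; it "succeeds"
// only for names present in the known set.
func describeClusterRole(name string, known map[string]bool) (string, error) {
	if known[name] {
		return "described " + name, nil
	}
	return "", errors.New("clusterrole " + name + " not found")
}

// describeOperatorClusterRole tries each common name in order.
// Resource name is generally 'postgres-operator' but in some environments
// like OpenShift it could be 'postgresoperator'.
func describeOperatorClusterRole(known map[string]bool) (string, error) {
	var lastErr error
	for _, name := range []string{"postgres-operator", "postgresoperator"} {
		out, err := describeClusterRole(name, known)
		if err == nil {
			return out, nil
		}
		lastErr = err // remember the failure instead of silently discarding it
	}
	return "", lastErr
}

func main() {
	out, err := describeOperatorClusterRole(map[string]bool{"postgresoperator": true})
	fmt.Println(out, err)
}
```

Keeping the loop in one helper makes the special case obvious and ensures the first error isn't overwritten unseen.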
	writeInfo(cmd, "Collecting CRDs...")
	// list, err := clientset.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{})

	labelSelector := "app.kubernetes.io/name=pgo"
Is this label the same across our installers (helm/kustomize)?
hmm....I didn't know it could be different. I will have to check.
so this was a bit of a challenge to work around. That label is not found in a few installers. So I refactored how it filters the CRDs.
refactor is pushed
This makes it clearer that 'postgresoperator' is a special case for certain installers.
Some installers don't label the CRDs with `app.kubernetes.io/name=pgo`. This refactor filters all the CRDs for a name containing "postgres-operator.crunchydata.com".
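The name-based filter could look something like this sketch (the function name and shape are illustrative; the real code would apply this to the list returned by the API server):

```go
package main

import (
	"fmt"
	"strings"
)

// filterOperatorCRDs keeps only CRD names that belong to the operator's API
// group, regardless of how the CRD was installed or labeled.
func filterOperatorCRDs(crdNames []string) []string {
	var matched []string
	for _, name := range crdNames {
		if strings.Contains(name, "postgres-operator.crunchydata.com") {
			matched = append(matched, name)
		}
	}
	return matched
}

func main() {
	names := []string{
		"postgresclusters.postgres-operator.crunchydata.com",
		"pgadmins.postgres-operator.crunchydata.com",
		"certificates.cert-manager.io",
	}
	// Prints only the two postgres-operator.crunchydata.com CRD names.
	fmt.Println(filterOperatorCRDs(names))
}
```

Matching on the API group in the CRD name avoids depending on installer-specific labels entirely.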