[WFLY-19898] remove PostgreSQL internal image configuration #972
Conversation
Since this was added as part of the k8s testing, I've added @kabir as reviewer since he is the one who will know about this.
Maybe the Kubernetes failure means there is something else needed? |
I don't remember EXACTLY how this works, but I think this init container connects to todo-backend-postgresql:5432 in a loop to ensure that the postgresql server started by the Helm chart is up and running. I think this happens before trying to install WildFly.
With this enabled it works since the DB is up and running. Without it I see there are problems in the k8s tests, presumably because we are not waiting to install WildFly until we know postgres is running (i.e. what this block fixes).
Although this might be using a different version of postgres than the image, we're only using the client for this initContainer. Once the initContainer check has passed the initContainer is thrown away.
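The loop's semantics can be seen with a purely illustrative plain-shell stand-in: `check_db` plays the role of `pg_isready -h todo-backend-postgresql -p 5432`, and a background job simulates postgres coming up a moment later (all names here are made up for the sketch).

```shell
# Illustrative stand-in for the initContainer's wait loop.
marker=$(mktemp -u)
check_db() { [ -f "$marker" ]; }   # hypothetical stand-in for pg_isready

( sleep 1; : > "$marker" ) &       # simulate the DB becoming ready after ~1s

tries=0
until check_db; do
  echo "waiting for database"
  tries=$((tries+1))
  sleep 0.2
done
echo "database is up after $tries checks"
rm -f "$marker"
```

Once the loop exits, the init container's process ends with status 0 and Kubernetes proceeds to start the main container.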
@kabir could we perhaps move this initContainer, or have an alternative that runs from somewhere in the kubernetes override dir for this quickstart? If not, I guess we should keep it as it is, and perhaps I can remove it "later" from the dev branch |
I believe the initContainers entry needs to be associated with the wildfly entry in the Helm chart. So (without much research) I believe it can't be moved |
What if we inject such content in the helm chart?
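One hedged sketch of that injection idea, relying on Helm's usual values merging with an extra `-f` file at install time: the initContainer block (copied from this chart's values.yaml) would live in a separate values file in the kubernetes override dir instead of the default chart values. The file name `values-k8s.yaml` is an assumption.

```yaml
# Hypothetical values-k8s.yaml (name assumed), passed as an additional
# "helm install -f" file so it merges over charts/values.yaml.
wildfly:
  initContainers:
    - name: check-db-ready
      image: postgres:9.6.5
      command: [ 'sh', '-c',
        'until pg_isready -h todo-backend-postgresql -p 5432;
         do echo waiting for database; sleep 2; done;' ]
```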
That might be possible by using |
Hi @kabir, I am trying to understand it. |
Honestly, my concern with changing this at the moment is that we would probably also need to rework the docs, and this may be too late for that; we are one week away from feature freeze |
@kstekovi @emmartins Since the wildfly image is pushed to the k8s registry manually before running
Maybe something can be done in the WildFly DS config to reconnect when the DB is not up when WildFly tries to start, but that isn't my area of expertise. If not, we need to block starting WildFly until postgres is ready. Helm does not guarantee any ordering of things within the charts, so initContainers seem like the natural choice. In other words, we WANT WildFly to wait until postgres is up and running. Here is some background reading for why I went with this approach:
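For the DS reconnect idea, a rough sketch using the validation attributes of the WildFly datasources subsystem (the datasource name `ToDosDS` is an assumption, and it is untested whether validation alone lets boot proceed while the DB is still down):

```
# Hypothetical jboss-cli sketch: let the pool detect and replace broken
# connections in the background rather than failing permanently.
/subsystem=datasources/data-source=ToDosDS:write-attribute(name=background-validation, value=true)
/subsystem=datasources/data-source=ToDosDS:write-attribute(name=background-validation-millis, value=10000)
/subsystem=datasources/data-source=ToDosDS:write-attribute(name=check-valid-connection-sql, value="SELECT 1")
```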
I have no idea how big/small this database image is. Note it contains no data; I am just using it for the pg_isready command.
Perhaps the initContainers could instead use a busybox image and have commands to check the postgres server port via telnet or something. No idea if that is possible or not. Still, busybox has some size too, probably (if space saving actually is a requirement for a quickstart for users to follow...).
Or why not unify that with the postgres version the Helm chart uses? Inspecting the postgres pod from kubernetes, the image is called
Or perhaps the Helm chart can be split into two, so the user first installs postgres and then installs wildfly. I don't know why this one is written as a single Helm chart; I think @ehsavoie or @jmesnil wrote the initial version. My only involvement with it has been getting it to work on Openshift and Kubernetes CI. |
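The busybox idea above could look roughly like this; the image tag, and whether the busybox build ships `nc` with `-z` support, are assumptions:

```yaml
# Hypothetical smaller alternative: a raw TCP check on the postgres port
# with busybox's nc instead of the full postgres client image.
initContainers:
  - name: check-db-ready
    image: busybox:1.36
    command: [ 'sh', '-c',
      'until nc -z todo-backend-postgresql 5432;
       do echo waiting for database; sleep 2; done;' ]
```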
Yes, I agree. I will just update the image version to the same one the PostgreSQL Helm chart uses. @kabir Thank you for the details.
I think this solution can be universal for any database. The
The different types of DB run on different ports, so where to check the connection would have to be configured explicitly. I don't like this.
That is also possible and could be universal for any database. But I like being able to install everything with only one command, as it is now. |
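For reference, the split-chart alternative would turn the single install into two commands, sketched here with assumed release and chart names:

```shell
# Hypothetical two-step install if the chart were split in two:
helm install todo-db bitnami/postgresql
helm install todo-backend wildfly/wildfly -f charts/values.yaml
```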
Force-pushed from 9babecf to 548350f
I think this PR could be simplified to the following; I think it is more straightforward than digging up the image version from a particular bitnami chart version and trying to keep them in sync.

```diff
diff --git a/todo-backend/charts/values.yaml b/todo-backend/charts/values.yaml
index 882e1cd03..2b8766e48 100644
--- a/todo-backend/charts/values.yaml
+++ b/todo-backend/charts/values.yaml
@@ -3,6 +3,9 @@
 # Declare variables to be passed into your templates.
 postgresql:
+  image:
+    repository: bitnami/postgresql
+    tag: 17
   auth:
     username: todos-db
     password: todos-db
@@ -46,7 +49,7 @@ wildfly:
           value: "96"
   initContainers:
     - name: check-db-ready
-      image: postgres:9.6.5
+      image: bitnami/postgresql:17
       command: [ 'sh', '-c',
         'until pg_isready -h todo-backend-postgresql -p 5432;
          do echo waiting for database; sleep 2; done;' ]
```
|
Did you try it? It didn't work for me when I tried updating these values a long time ago. The PostgreSQL documentation for the pg_isready command says it is not supported for 9.6 (https://www.postgresql.org/docs/current/app-pg-isready.html), so we have to use the old version of this command, because when I tried to use a new PostgreSQL image it didn't work as the status checker. |
Force-pushed from a4e85fb to 75ebdc7
@kabir Kubernetes is not liking these changes, any clue? Maybe @kstekovi also needs to make changes at https://github.com/wildfly/quickstart/tree/main/.github/workflows/scripts/kubernetes/qs-overrides/todo-backend ? |
@emmartins Not sure. It looks like the todo-backend image (which IIRC is WildFly) is not starting. To me that seems like it can't connect to postgresql. Maybe the initContainer isn't doing its work properly with the new image. Does it contain the
Does it work for you and @kstekovi locally? |
We identified a problem with the updated Bitnami chart: it defaults to requesting an 8Gi PVC, which in our case is bigger than the max allowed (1Gi). Maybe the Kubernetes used in CI has a similar low limit? https://github.com/bitnami/charts/blob/postgresql/16.2.2/bitnami/postgresql/values.yaml#L803 |
8Gi is quite a lot for any cluster, even more so when you just want to run this demo app; maybe we should explicitly set it much lower (like 100Mi, or even less if it works; need to try) |
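A hedged sketch of that override, using the `primary.persistence.size` key path from the linked Bitnami values.yaml (whether 100Mi is enough for this demo would need testing):

```yaml
# In charts/values.yaml: shrink the PVC the bitnami postgresql subchart requests.
postgresql:
  primary:
    persistence:
      size: 100Mi
```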
Hi @kabir, the OpenShift part works, but I have some issues with minikube and Podman. I started minikube rootless for Podman; in this mode it is not possible to enable the registry addon. To start minikube with root privileges for Podman I had to add my username to sudoers. Now I am trying to push the todo-backend image to the registry, but it fails because the server is HTTP and the client expects HTTPS. |
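One common workaround for the HTTP-registry vs HTTPS-client mismatch is to tell podman the registry is insecure; the registry address and image tag here are assumptions:

```shell
# Hypothetical: push over plain HTTP to the minikube registry.
podman push --tls-verify=false localhost:5000/todo-backend:latest
```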
New update: I pushed the image into the minikube registry, but unfortunately the application start ends in the same state as the CI here on GitHub. |
https://issues.redhat.com/browse/WFLY-19898