
[WFLY-19898] remove PostgreSQL internal image configuration #972


Open

kstekovi wants to merge 1 commit into main from the WFLY-19898 branch

Conversation

kstekovi
Contributor

@kstekovi kstekovi requested a review from emmartins as a code owner October 29, 2024 16:01
@ehsavoie ehsavoie requested a review from kabir October 29, 2024 16:11
Contributor

@ehsavoie ehsavoie left a comment


Since this was added as part of the k8s testing, I've added @kabir as a reviewer, since he is the one who will know about this.

@emmartins
Contributor

Maybe the Kubernetes failure means something else is needed?

Contributor

@kabir kabir left a comment


I don't remember EXACTLY how this works, but I think this init container connects to todo-backend-postgresql:5432 in a loop to ensure that the postgresql server started by the Helm chart is up and running. I think this happens before trying to install WildFly.

With this enabled it works since the DB is up and running. Without it I see there are problems in the k8s tests, presumably because we are not waiting to install WildFly until we know postgres is running (i.e. what this block fixes).

Although this might be using a different version of postgres than the server image, we're only using the client in this initContainer. Once the check has passed, the initContainer is thrown away.
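For reference, the block under discussion looks roughly like this in todo-backend/charts/values.yaml (reconstructed from the diff quoted later in this thread); the loop keeps the pod in its Init phase until the PostgreSQL service answers pg_isready:

      initContainers:
        - name: check-db-ready
          image: postgres:9.6.5
          # keep retrying pg_isready against the service created by the
          # postgresql subchart; the pod only continues once the check succeeds
          command: [ 'sh', '-c',
              'until pg_isready -h todo-backend-postgresql -p 5432; do echo waiting for database; sleep 2; done;' ]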

@emmartins
Contributor

emmartins commented Nov 26, 2024

@kabir could we perhaps move this initContainer, or have an alternative that runs from somewhere in the kubernetes override dir for this quickstart? If not, I guess we should keep it as it is, and perhaps I'll remove it "later" from the dev branch.

@kabir
Contributor

kabir commented Nov 26, 2024

I believe the initContainers need to be associated with the wildfly entry in the Helm chart. So (without much research) I believe it can't be moved.

@emmartins
Contributor

emmartins commented Nov 26, 2024 via email

@kabir
Contributor

kabir commented Nov 26, 2024

That might be possible by using deploy.initcontainers https://github.com/wildfly/wildfly-charts/blob/main/charts/wildfly/README.md

@kstekovi
Contributor Author

Hi @kabir,

I'm trying to understand this.

  • Does WildFly require the image of the whole PostgreSQL database just to check that another instance is running? It seems like a very weird check.
  • This initContainer runs before WildFly starts, so it makes WildFly's startup much longer because the whole database image has to be downloaded.
  • Could we configure the database subsystem to connect later, when a deployed application actually uses the connection? Or some auto-reconnect feature? I don't know what the (Hibernate?) subsystem can do here for the database connection.
  • Is a database driver required for the check? Is that the reason the whole database image is downloaded?

@emmartins
Contributor

@kstekovi FYI I think it was @jmesnil who added those initContainers, or maybe @ehsavoie.

@emmartins
Contributor

Honestly, my concern with changing this at the moment is that we would probably also need to rework the docs, and it may be too late for that; we are one week away from feature freeze.

@kabir
Contributor

kabir commented Nov 27, 2024

@kstekovi @emmartins
I added the initContainer stuff because without it (if you make sure you don't have any of the images locally in the kubernetes registry) I can reproduce the problem every time by removing the initContainers entry, as you have done, and running helm install.

Since the wildfly image is pushed to the k8s registry manually before running helm install, it is available before the postgres one is downloaded. Thus WildFly tries to start first and blows up, which is exactly why this PR breaks the CI.

Maybe something can be done in the WildFly DS config to reconnect when the DB is not up when WildFly tries to start, but that isn't my area of expertise.

If not, we need to block starting WildFly until postgres is ready. Helm does not guarantee any ordering of things within the charts, so initContainers seem like the natural choice. In other words, we WANT WildFly to wait until postgres is up and running.

Here is some background reading for why I went with this approach:

I have no idea how big/small this database image is. Note it contains no data, I am just using it for the pg_isready command.

Perhaps the initContainers could instead use a busybox image and have commands to check the postgres server port via telnet or something. No idea if that is possible or not. Still, busybox has some size too, probably (if space saving actually is a requirement for a quickstart for users to follow....)
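For illustration only, such a busybox-based check might look something like the sketch below; it assumes the busybox image's nc applet supports -z/-w for a simple port probe, and the port would still have to be hard-coded per database:

      initContainers:
        - name: check-db-ready
          image: busybox:1.36
          # hypothetical alternative: probe the PostgreSQL port instead of
          # pulling a full postgres client image; assumes nc supports -z/-w
          command: [ 'sh', '-c',
              'until nc -z -w 2 todo-backend-postgresql 5432; do echo waiting for database; sleep 2; done;' ]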

Or why not unify that with the postgres version the Helm chart uses? Inspecting the postgres pod from kubernetes, the image is called docker.io/bitnami/postgresql:16.0.0-debian-11-r13. But since the Helm chart is downloading that anyway, it should not add anything. I don't know whether that version ends up changing as the Helm repository is updated, though.

Or perhaps the Helm chart can be split into two. So the user first installs postgres, and then installs wildfly. I don't know why this one is written as a single Helm chart, I think @ehsavoie or @jmesnil wrote the initial version. My only involvement with it has been getting it to work on Openshift and Kubernetes CI.

@kstekovi
Contributor Author

@emmartins

Honestly, my concern with changing this at the moment is that we would probably also need to rework the docs, and it may be too late for that; we are one week away from feature freeze.

Yes, I agree. I will just update the image version to the same one the PostgreSQL Helm chart uses.
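Concretely, that would mean pinning the initContainer image to the tag observed in the running postgres pod, something along these lines (the exact tag may drift as the Bitnami chart is updated):

      initContainers:
        - name: check-db-ready
          # reuse the server image the Helm chart already pulls, so no extra
          # image download is needed; tag taken from the pod kabir inspected
          image: docker.io/bitnami/postgresql:16.0.0-debian-11-r13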

@kabir Thank you for the details.

Maybe something can be done in the WildFly DS config to reconnect when the DB is not up when WildFly tries to start, but that isn't my area of expertise.

I think this solution could be universal for any database. pg_isready seems to be very specific to postgres.

Perhaps the initContainers could instead use a busybox image and have commands to check the postgres server port via telnet or something. No idea if that is possible or not. Still, busybox has some size too, probably (if space saving actually is a requirement for a quickstart for users to follow....)

Different types of DB run on different ports, so where to check the connection would have to be configured explicitly. I don't like this.

Or perhaps the Helm chart can be split into two. So the user first installs postgres, and then installs wildfly. I don't know why this one is written as a single Helm chart, I think @ehsavoie or @jmesnil wrote the initial version. My only involvement with it has been getting it to work on Openshift and Kubernetes CI.

That is also possible and could be universal for any database. But I like being able to install everything with a single command, as it is now.

@kstekovi kstekovi force-pushed the WFLY-19898 branch 5 times, most recently from 9babecf to 548350f Compare November 28, 2024 13:00
@jbliznak
Contributor

jbliznak commented May 31, 2025

I think this PR could be simplified to this; it is more straightforward than digging up the image version from a particular bitnami chart version and trying to keep them in sync:

diff --git a/todo-backend/charts/values.yaml b/todo-backend/charts/values.yaml
index 882e1cd03..2b8766e48 100644
--- a/todo-backend/charts/values.yaml
+++ b/todo-backend/charts/values.yaml
@@ -3,6 +3,9 @@
 # Declare variables to be passed into your templates.
 
 postgresql:
+  image:
+    repository: bitnami/postgresql
+    tag: 17
   auth:
     username: todos-db
     password: todos-db
@@ -46,7 +49,7 @@ wildfly:
           value: "96"
       initContainers:
         - name: check-db-ready
-          image: postgres:9.6.5
+          image: bitnami/postgresql:17
           command: [ 'sh', '-c',
               'until pg_isready -h todo-backend-postgresql -p 5432; 
                     do echo waiting for database; sleep 2; done;' ]

@kstekovi
Contributor Author

kstekovi commented Jun 2, 2025

I think this PR could be simplified to this; it is more straightforward than digging up the image version from a particular bitnami chart version and trying to keep them in sync:

Did you try it? It didn't work for me when I tried updating these values a long time ago.

The PostgreSQL documentation for the pg_isready command (https://www.postgresql.org/docs/current/app-pg-isready.html) says it is not supported for 9.6, so we have to use the old version of this command. When I tried to use a newer PostgreSQL image, it did not work as the status checker.

@kstekovi kstekovi force-pushed the WFLY-19898 branch 2 times, most recently from a4e85fb to 75ebdc7 Compare June 2, 2025 15:03
@emmartins
Contributor

@kstekovi please update this PR taking @jbliznak's suggestion into account.

@emmartins
Contributor

emmartins commented Jun 16, 2025

@kabir Kubernetes is not liking these changes, any clue? Maybe @kstekovi also needs to make changes at https://github.com/wildfly/quickstart/tree/main/.github/workflows/scripts/kubernetes/qs-overrides/todo-backend ?

@kabir
Contributor

kabir commented Jun 16, 2025

@emmartins Not sure. It looks like the todo-backend image (which IIRC is WildFly) is not starting. To me that seems like it can't connect to postgresql. Maybe the initContainer isn't doing its work properly with the new image. Does it contain the pg_isready command the initContainer is trying to run?

Does it work for you and @kstekovi locally?

@jbliznak
Contributor

jbliznak commented Jun 16, 2025

We identified a problem with the updated Bitnami chart: it defaults to requesting an 8Gi PVC, which in our case is bigger than the maximum allowed (1Gi). Maybe the Kubernetes used in CI has a similar lower limit?

https://github.com/bitnami/charts/blob/postgresql/16.2.2/bitnami/postgresql/values.yaml#L803

@jbliznak
Contributor

8Gi is quite a lot for any cluster, even more so when you just want to run this demo app. Maybe we should explicitly set it much lower (like 100Mi, or even less if it works; need to try).
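For example (a sketch only, assuming the Bitnami subchart exposes the primary.persistence.size key shown in the values.yaml linked above), the quickstart's values.yaml could cap the request:

      postgresql:
        primary:
          persistence:
            # untested guess at a small but workable PVC size for this demo
            size: 100Mi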

@kstekovi
Contributor Author

Hi @kabir The OpenShift part works, but I have some issues with minikube and Podman.

I start minikube rootless with Podman. In this mode it is not possible to enable the registry addon.
The issue is described here: kubernetes/minikube#20724

To start minikube with root privileges for Podman, I had to add my username to sudoers so I can use Podman as root.

Now I am trying to push the "todo-backend" image to the registry, but it fails because of an HTTP server versus an HTTPS client.

@kstekovi
Contributor Author

Update: I pushed the image into the minikube registry, but unfortunately the application startup ends in the same state as the CI here on GitHub.
