Decrease memory footprint of JVM-based apps by setting MALLOC_ARENA_MAX #316
Conversation
Force-pushed 17bfb9a to 78db949
@giner thanks for the contribution! We're currently planning to merge this after the upgrade to Canton 3.3/splice 0.4, just to avoid coupling too many changes to the HDM, as that one already introduces some risk.
Sounds good
@moritzkiefer-da friendly ping
@giner thanks for the ping. I suggest we wait until mainnet is actually on 0.4.x. There is still a chance we end up directly upgrading to 0.4.1 there, or something like that, depending on what issues we find in 0.4.0.
/cluster_test
Deploy cluster test triggered for Commit 78db94934e52c30ca168b38e6123e93ecfc0003b in , please contact a Contributor to approve it in CircleCI: https://app.circleci.com/pipelines/github/DACH-NY/canton-network-internal/16667
@giner could you rebase this please
Force-pushed 78db949 to bb4853d
rebased
/cluster_test
Deploy cluster test triggered for Commit bb4853d5df536336c942fd4747d0863061c613b7 in , please contact a Contributor to approve it in CircleCI: https://app.circleci.com/pipelines/github/DACH-NY/canton-network-internal/16721
@stas-sbi you are still a week behind; you might have missed a git fetch/pull before the rebase?
Force-pushed bb4853d to c1cb8fb
sorry, forgot to fetch, it should be better now
/cluster_test
Deploy cluster test triggered for Commit c1cb8fba1778f3f063b12cbc01f5c84383623c05 in , please contact a Contributor to approve it in CircleCI: https://app.circleci.com/pipelines/github/DACH-NY/canton-network-internal/16730
tests passed, thank you! I'll merge on Monday as I'm out tomorrow and want to be around if it blows up in one of our internal test clusters.
@giner unfortunately the DCO check seems stuck for reasons I don't understand. Could you please push a new commit to see if that triggers it again.
Set MALLOC_ARENA_MAX to 2 by default and introduce the environment variable SPLICE_MALLOC_ARENA_MAX as an option to override this value. This can decrease the resident memory footprint of JVM processes and reduce unnecessary OOM kills by the OS. Here are the details:

- By default on 64-bit systems, glibc malloc limits the number of arenas to 8 * num_of_cores (in our case, with 16 cores, the maximum becomes 8 * 16 = 128 arenas).
- Container limits are not respected.
- Each arena allocates one 64 MiB heap at the time of creation.
- A new arena is created each time malloc is called while all existing arenas are locked by other threads.
- JVM-based software tends to create a lot of threads; at the same time it has its own memory management, so it doesn't call malloc frequently.

Memory waste from a large number of arenas is quite significant on machines with many physical cores, and this waste is especially noticeable in smaller apps.

Signed-off-by: Stanislav German-Evtushenko <ginermail@gmail.com>
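The behavior described in the commit message can be sketched as a small launcher snippet (a sketch only, not the PR's actual implementation; the `java` invocation is a placeholder, and `SPLICE_MALLOC_ARENA_MAX` is the override variable this PR introduces):

```shell
#!/bin/sh
# Default MALLOC_ARENA_MAX to 2, but let SPLICE_MALLOC_ARENA_MAX override it.
# glibc malloc reads MALLOC_ARENA_MAX at process startup, so it must be
# exported before the JVM is launched.
export MALLOC_ARENA_MAX="${SPLICE_MALLOC_ARENA_MAX:-2}"
echo "MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX"
# exec java "$@"   # placeholder for the real JVM entry point
```

Running with `SPLICE_MALLOC_ARENA_MAX` unset keeps the capped default of 2 arenas; setting it, e.g. `SPLICE_MALLOC_ARENA_MAX=8`, restores a larger arena budget for workloads that do call malloc heavily.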
Force-pushed c1cb8fb to b7dcace
done
thank you! That seems to have worked