Decrease memory footprint of JVM-based apps by setting MALLOC_ARENA_MAX #316


Conversation

giner
Contributor

@giner commented Apr 8, 2025

Set MALLOC_ARENA_MAX to 2 by default and introduce the environment variable SPLICE_MALLOC_ARENA_MAX as an option to override this value. This can decrease the resident memory footprint of JVM processes and reduce unnecessary OOM kills by the OS.

Here are the details (a minimal sketch of the override logic follows the list):

  • By default on 64-bit systems, the limit on the number of arenas in glibc malloc is 8 * num_of_cores (in our case, with 16 cores, the maximum becomes 8 * 16 = 128)
  • Container CPU limits are not respected when glibc counts cores
  • Each arena allocates one 64 MiB heap at the time of creation
  • A new arena is created whenever malloc is called while all existing arenas are locked by other threads
  • JVM-based software tends to create many threads; at the same time it has its own memory management, so it doesn't call malloc frequently. The memory wasted by a large number of arenas is quite significant on machines with many physical cores, and it is especially noticeable in smaller apps.
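
Below is a minimal sketch of the default-with-override logic, assuming a small launcher wrapper; the actual change may instead set the variable in the container entrypoint or deployment charts, and the class name and argument handling here are illustrative only:

```java
// Hypothetical wrapper, not this PR's actual implementation: applies a
// default MALLOC_ARENA_MAX of 2 unless SPLICE_MALLOC_ARENA_MAX is set.
public class MallocArenaLauncher {
    public static void main(String[] args) throws Exception {
        String override = System.getenv("SPLICE_MALLOC_ARENA_MAX");
        String arenaMax = (override == null || override.isEmpty()) ? "2" : override;

        // glibc reads MALLOC_ARENA_MAX once during malloc initialization,
        // so it must be in the environment before the JVM process starts;
        // setting it from inside an already-running JVM has no effect.
        // Without a cap, the worst case on a 16-core machine is
        // 8 arenas/core * 16 cores * 64 MiB/heap = 8 GiB of address space.
        ProcessBuilder pb = new ProcessBuilder(args); // e.g. ["java", "-jar", "app.jar"]
        pb.environment().put("MALLOC_ARENA_MAX", arenaMax);
        pb.inheritIO();
        System.exit(pb.start().waitFor());
    }
}
```

In a container deployment the same effect is usually achieved by exporting the variable in the entrypoint script or pod spec rather than via a wrapper process.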

@giner force-pushed the stas/jvm_decrease_memory_footprint_by_setting_malloc_arena_max branch 2 times, most recently from 17bfb9a to 78db949 on April 8, 2025 06:11
@moritzkiefer-da
Contributor

@giner thanks for the contribution! We're currently planning to merge this after the upgrade to Canton 3.3 / Splice 0.4, just to avoid coupling too many changes to the HDM, as that one already introduces some risk.

@giner
Contributor Author

giner commented May 2, 2025

Sounds good

@giner
Contributor Author

giner commented Jun 4, 2025

@moritzkiefer-da friendly ping

@moritzkiefer-da
Contributor

@giner thanks for the ping. I suggest we wait until mainnet is actually on 0.4.x. There is still a chance we end up upgrading directly to 0.4.1 there, or something like that, depending on what issues we find in 0.4.0.

@moritzkiefer-da
Contributor

/cluster_test


Deploy cluster test triggered for Commit 78db94934e52c30ca168b38e6123e93ecfc0003b in , please contact a Contributor to approve it in CircleCI: https://app.circleci.com/pipelines/github/DACH-NY/canton-network-internal/16667

@moritzkiefer-da
Contributor

@giner could you rebase this please

@giner force-pushed the stas/jvm_decrease_memory_footprint_by_setting_malloc_arena_max branch from 78db949 to bb4853d on June 26, 2025 12:51
@stas-sbi
Contributor

rebased

@moritzkiefer-da
Contributor

/cluster_test


Deploy cluster test triggered for Commit bb4853d5df536336c942fd4747d0863061c613b7 in , please contact a Contributor to approve it in CircleCI: https://app.circleci.com/pipelines/github/DACH-NY/canton-network-internal/16721

@moritzkiefer-da
Contributor

@stas-sbi you are still a week behind; you might have missed a git fetch/pull before the rebase?

@giner force-pushed the stas/jvm_decrease_memory_footprint_by_setting_malloc_arena_max branch from bb4853d to c1cb8fb on June 26, 2025 13:48
@giner
Contributor Author

giner commented Jun 26, 2025

sorry, forgot to fetch, it should be better now

@moritzkiefer-da
Contributor

/cluster_test


Deploy cluster test triggered for Commit c1cb8fba1778f3f063b12cbc01f5c84383623c05 in , please contact a Contributor to approve it in CircleCI: https://app.circleci.com/pipelines/github/DACH-NY/canton-network-internal/16730

@moritzkiefer-da
Contributor

tests passed, thank you! I'll merge on Monday as I'm out tomorrow and want to be around if it blows up in one of our internal test clusters.

@moritzkiefer-da
Contributor

@giner unfortunately the DCO check seems stuck for reasons I don't understand. Could you please push a new commit to see if that re-triggers it?

Set MALLOC_ARENA_MAX to 2 by default and introduce environment variable
SPLICE_MALLOC_ARENA_MAX as an option to override this value.  This can
decrease the resident memory footprint of JVM processes and reduce
unnecessary OOM kills by the OS.

Here are the details:
- By default on 64-bit systems, the limit on the number of arenas in
  glibc malloc is 8 * num_of_cores (in our case, with 16 cores, the
  maximum becomes 8 * 16 = 128)
- Container CPU limits are not respected when glibc counts cores
- Each arena allocates one 64 MiB heap at the time of creation
- A new arena is created whenever malloc is called while all existing
  arenas are locked by other threads
- JVM-based software tends to create many threads; at the same time it
  has its own memory management, so it doesn't call malloc frequently.
  The memory wasted by a large number of arenas is quite significant on
  machines with many physical cores, and it is especially noticeable in
  smaller apps.

Signed-off-by: Stanislav German-Evtushenko <ginermail@gmail.com>
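
For completeness, one way to sanity-check the effect is to compare the resident set size of a busy JVM started with and without the cap. A minimal Linux-only sketch (the class name and approach are illustrative, not part of the change):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative Linux-only probe: print this JVM's resident set size and
// thread count from /proc/self/status, so runs with MALLOC_ARENA_MAX=2
// and runs without it can be compared under the same workload.
public class RssProbe {
    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS:") || line.startsWith("Threads:")) {
                System.out.println(line);
            }
        }
    }
}
```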
@giner force-pushed the stas/jvm_decrease_memory_footprint_by_setting_malloc_arena_max branch from c1cb8fb to b7dcace on June 30, 2025 09:37
@stas-sbi
Contributor

done

@moritzkiefer-da
Contributor

thank you! That seems to have worked

@moritzkiefer-da merged commit d140107 into hyperledger-labs:main on Jun 30, 2025
59 checks passed
@martinflorian-da removed their assignment on Jun 30, 2025