Reduce poi memory #712

Open
jcw780 wants to merge 24 commits into CaffeineMC:develop from jcw780:reduce_poi_memory

Conversation

@jcw780
Contributor

@jcw780 jcw780 commented Jan 30, 2026

Reduce point of interest memory impact by not adding empty subchunks to the storage hashmap and by unloading non-portal searched points of interest.

Significant updates were made so extensive testing is likely required.
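
The first change described above (not adding empty subchunks to the storage hashmap) can be sketched roughly as follows. This is an illustrative stand-in, not the mod's actual code: `SparsePoiStorage`, the `String` POIs, and the packed-long keys are all assumptions made for the example.

```java
import java.util.*;

// Sketch: store only non-empty POI sections in the map, so empty
// subchunks never occupy hashmap entries. Illustrative only.
public class SparsePoiStorage {
    // key: packed section position, value: POIs in that section
    private final Map<Long, Set<String>> sections = new HashMap<>();

    public void add(long sectionPos, String poi) {
        // only creates a map entry when there is actually a POI to store
        sections.computeIfAbsent(sectionPos, k -> new HashSet<>()).add(poi);
    }

    public void remove(long sectionPos, String poi) {
        Set<String> set = sections.get(sectionPos);
        if (set != null && set.remove(poi) && set.isEmpty()) {
            // drop the entry instead of keeping an empty section around
            sections.remove(sectionPos);
        }
    }

    public Set<String> get(long sectionPos) {
        // an absent entry and an empty section look the same to callers
        return sections.getOrDefault(sectionPos, Collections.emptySet());
    }

    public int entryCount() {
        return sections.size();
    }
}
```

The design point is that readers cannot distinguish "section never stored" from "section empty", so empty placeholders carry no information and can be dropped from the map entirely.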

@jcw780 jcw780 marked this pull request as draft January 30, 2026 02:53
@jcw780 jcw780 marked this pull request as ready for review February 10, 2026 01:09
@2No2Name
Member

Does this PR only reduce memory usage, or did you also see any performance impact? I think we already talked about checking how lodestone compasses may be affected.

@jcw780
Contributor Author

jcw780 commented Feb 27, 2026

> Does this PR only reduce memory usage, or did you also see any performance impact? I think we already talked about checking how lodestone compasses may be affected

The primary objective is to reduce memory impact, i.e. mitigate the point of interest memory leak. I did not notice significant mspt changes compared to existing benchmarks, though I would expect some small ones since the storage hashmap is now much smaller. Some effort was taken to limit the cost, in general use, of no longer storing Optional.empty() entries.

The new logic seems to have significantly less unloading impact than a naive implementation because there are usually far fewer hashmap entries to remove. For instance, unloading a chunk with no PoiSections only has to remove the column's BitSet, versus that plus 16 or 24 (one per section in the chunk) Optional.empty() entries in the storage hashmap.
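
The unload-cost comparison above can be sketched as follows. Names like `SECTIONS_PER_CHUNK` and the toy key packing are assumptions for illustration, not the mod's real code:

```java
import java.util.*;

// Sketch: count how many map entries an unload must remove when empty
// sections are stored as Optional.empty() versus not stored at all.
public class UnloadCost {
    static final int SECTIONS_PER_CHUNK = 24; // e.g. 1.18+ world height

    // Dense layout: every section of a loaded chunk has an entry,
    // empty ones holding Optional.empty().
    static int unloadDense(Map<Long, Optional<String>> storage, long chunkPos) {
        int removed = 0;
        for (int y = 0; y < SECTIONS_PER_CHUNK; y++) {
            if (storage.remove(packSection(chunkPos, y)) != null) removed++;
        }
        return removed + 1; // +1 for the column's BitSet
    }

    // Sparse layout: empty sections were never inserted, so a chunk with
    // no POIs only needs its column BitSet removed.
    static int unloadSparse(Map<Long, String> storage, long chunkPos) {
        int removed = 0;
        for (int y = 0; y < SECTIONS_PER_CHUNK; y++) {
            if (storage.remove(packSection(chunkPos, y)) != null) removed++;
        }
        return removed + 1; // +1 for the column's BitSet
    }

    // Toy key packing for the sketch, not a real section-pos encoding.
    static long packSection(long chunkPos, int y) {
        return chunkPos * 64 + y;
    }

    public static void main(String[] args) {
        long chunk = 7L;
        Map<Long, Optional<String>> dense = new HashMap<>();
        for (int y = 0; y < SECTIONS_PER_CHUNK; y++) {
            dense.put(packSection(chunk, y), Optional.empty()); // all empty
        }
        Map<Long, String> sparse = new HashMap<>(); // nothing was ever inserted

        System.out.println(unloadDense(dense, chunk));  // 25 removals
        System.out.println(unloadSparse(sparse, chunk)); // 1 removal
    }
}
```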

For code that constantly calls getOrLoad on points of interest, like compasses inventoryTicked by LivingEntities, to observe persistent lag effects the chunk would have to be frequently loaded to border level or higher and then unloaded entirely (to empty), without the chunk having been searched by the portal ensureLoadedAndValid path. Otherwise it only costs one extra read right after the chunk gets unloaded.

There may be performance benefits if a server would otherwise be approaching its memory limit without the optimizations, e.g. a long-running server where a lot of chunks were loaded at some point. But I have not tested that exact case.
