I have a large repository containing ~500 000 files (in a large directory tree). Running `codeowners -u` on it requires giving NodeJS around 8 GB of RAM (`--max_old_space_size=8000`), and I can see memory usage grow as the scan progresses. This is a problem both when running locally and in an automated pipeline.
I didn't expect a filesystem traversal to require memory that grows linearly with the number of files scanned. Is there a reason for this?
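
For reference, here is a minimal sketch (not taken from the codeowners codebase; the `walk`/`visit` names are my own) of the streaming traversal I had in mind, where each entry is processed and then discarded, so heap usage should stay roughly flat regardless of tree size:

```js
// Streaming directory walk: fs.opendir yields entries lazily, so the
// full listing is never held in memory at once.
const { opendir } = require('node:fs/promises');
const path = require('node:path');

async function walk(dir, visit) {
  const handle = await opendir(dir); // async iterator over directory entries
  for await (const entry of handle) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      await walk(full, visit); // recurse one directory at a time
    } else {
      visit(full); // process the path, then let it be garbage-collected
    }
  }
}

// Example: count files without retaining any paths.
let count = 0;
walk('.', () => { count++; }).then(() => console.log(count));
```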