
Conversation

brunocroh
Contributor

@brunocroh brunocroh commented Aug 27, 2025

According to nodejs/performance#186

Update all benchmarks in the util namespace by setting the n value to the result from calibrate-n.

Util Calibration Benchmarks

| Util Function | Duration (minutes) | From Value | To Value |
| --- | --- | --- | --- |
| util.text-encoder | 42.00 | 1e6 | 1e3 |
| util.deprecate | 1.00 | 1e5 | 1e3 |
| util.format | 2.00 | 1e6 | 1e2 |
| util.get-callsite | 5.50 | 1e6 | 1e2 |
| util.inspect-array | 1.30 | 5e3 | 10 |
| util.inspect-proxy | 0.47 | 1e5 | 1e2 |
| util.inspect | 5.14 | 8e4 | 1e3 |
| util.parse-env | 0.34 | 3e4 | 10 |
| util.priority-queue | 0.04 | 1e5 | 100 |
| util.splice-one | 2.12 | 5e6 | 1e3 |
| util.style-text | 0.48 | 1e3 | 1e2 |
| util.type-check | 0.28 | 1e6 | 1e3 |

@nodejs-github-bot
Collaborator

Review requested:

  • @nodejs/performance

@nodejs-github-bot nodejs-github-bot added benchmark Issues and PRs related to the benchmark subsystem. util Issues and PRs related to the built-in util module. labels Aug 27, 2025
@brunocroh
Contributor Author

I see some benchmarks using syntax like 1e3 and others using 1000. Which one is preferable, or does it not matter?

According to nodejs/performance#186,
this benchmark takes 42 minutes for a single run,
so the calibrate-n script suggests reducing n from 1e6 to 1e3;
this should improve it.
@brunocroh brunocroh force-pushed the feat/calibrate-util-text-encoder branch from 7c95d23 to 5cd6f8a Compare August 27, 2025 22:00
According to nodejs/performance#186,
this benchmark takes 1 minute for a single run,
so the calibrate-n script suggests reducing n from 1e5 to 1e3.
@Uzlopak
Contributor

Uzlopak commented Aug 28, 2025

1e3 is shorter than 1000, that's all.

@brunocroh brunocroh force-pushed the feat/calibrate-util-text-encoder branch from dbc113d to 3bfb15a Compare August 28, 2025 08:18
@brunocroh brunocroh changed the title from "benchmark: calibrate util.text-encoder" to "benchmark: calibrate util.*" Aug 28, 2025
According to nodejs/performance#186,
this benchmark takes 2 minutes for a single run,
so the calibrate-n script suggests reducing n from 1e6 to 1e2.
According to nodejs/performance#186,
this benchmark takes 5.5 minutes for a single run,
so the calibrate-n script suggests reducing n from 1e6 to 1e2.
According to nodejs/performance#186,
this benchmark takes 1.3 minutes for a single run,
so the calibrate-n script suggests reducing n from 5e3 to 10.
According to nodejs/performance#186,
this benchmark takes 0.47 minutes for a single run,
so the calibrate-n script suggests reducing n from 1e5 to 1e2.
According to nodejs/performance#186,
this benchmark takes 5.14 minutes for a single run,
so the calibrate-n script suggests reducing n from 8e4 to 1e3.
According to nodejs/performance#186,
this benchmark takes 0.34 minutes for a single run,
so the calibrate-n script suggests reducing n from 3e4 to 10.
According to nodejs/performance#186,
this benchmark takes 0.04 minutes for a single run,
so the calibrate-n script suggests reducing n from 1e5 to 100.
According to nodejs/performance#186,
this benchmark takes 2.12 minutes for a single run,
so the calibrate-n script suggests reducing n from 5e6 to 1e3.
According to nodejs/performance#186,
this benchmark takes 0.48 minutes for a single run,
so the calibrate-n script suggests reducing n from 1e3 to 1e2.
According to nodejs/performance#186,
this benchmark takes 0.28 minutes for a single run,
so the calibrate-n script suggests reducing n from 1e6 to 1e3.
@brunocroh brunocroh force-pushed the feat/calibrate-util-text-encoder branch from 3bfb15a to 96f2e16 Compare August 28, 2025 08:31
@RafaelGSS
Member

@brunocroh can you share the machine you used to run calibrate-n? We tend to use the most dedicated machine possible; I had to set up a Hetzner dedicated machine with 4 vCPUs and 16 GB of RAM for my recent benchmark updates.

If your machine is at least twice as good as the one I mentioned, these benchmarks might get a different CV% on those machines. If that's the case, I suggest you increase the values by one zero for the ones that were reduced drastically, e.g. 8e4 -> 1e3.

@brunocroh
Contributor Author

@brunocroh can you share the machine you used to run calibrate-n? We tend to use the most dedicated machine possible; I had to set up a Hetzner dedicated machine with 4 vCPUs and 16 GB of RAM for my recent benchmark updates.

If your machine is at least twice as good as the one I mentioned, these benchmarks might get a different CV% on those machines. If that's the case, I suggest you increase the values by one zero for the ones that were reduced drastically, e.g. 8e4 -> 1e3.

Nice, I ran it on a Mac M2 with 12 CPU cores and 16 GB of RAM.
I also have a VPS that I can use for that purpose, 4 vCPUs and 12 GB of RAM. Let me retest it there.

@brunocroh
Contributor Author

I set up a 4 vCPU, 16 GB dedicated machine to test these benchmarks again, and I reran the ones that took more than 5 minutes (text-encoder, inspect, and get-callsite) to see if we could reduce n somehow, but unfortunately I couldn't find any room to do so.

So I think the best option is to close this PR without any changes for now.

@brunocroh brunocroh closed this Aug 31, 2025