test_runner: propagate --experimental-config-file to child processes #59047


Open · wants to merge 2 commits into main
Conversation


@sjwhole commented Jul 13, 2025

Summary

This PR fixes an issue where --experimental-config-file was not being propagated to child test processes, causing experimental features defined in the config file to not work in test isolation mode.

The solution implements a more nuanced approach that:

  • Allows --experimental-config-file to be propagated to child processes
  • Prevents duplicate coverage reports by using the existing NODE_TEST_CONTEXT environment variable to disable coverage collection in child processes (an earlier revision introduced a dedicated NODE_TEST_DISABLE_COVERAGE variable for this; the final approach reuses NODE_TEST_CONTEXT)
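The filtering described above can be sketched in isolation. This is a simplified, self-contained illustration (not the actual patch; the real implementation in lib/internal/test_runner/runner.js uses primordials such as ArrayPrototypeIncludes, and the real filter lists are longer):

```javascript
// Simplified sketch: exec-argv filtering for child test processes.
// '--experimental-config-file' is deliberately absent from
// kFilterArgValues, so the flag (and its value) survives into the child.
const kFilterArgs = ['--test', '--watch'];
const kFilterArgValues = ['--test-reporter', '--test-reporter-destination'];

function filterExecArgv(arg, i, arr) {
  return !kFilterArgs.includes(arg) &&
    !kFilterArgValues.some((p) =>
      arg === p ||                     // bare flag to drop
      (i > 0 && arr[i - 1] === p) ||   // value belonging to a dropped flag
      arg.startsWith(`${p}=`));        // --flag=value form
}

const parentArgs = [
  '--test',
  '--experimental-config-file=node.config.json',
  '--test-reporter', 'spec',
];
const childArgs = parentArgs.filter(filterExecArgv);
// The config-file flag propagates; the reporter flags do not.
console.log(childArgs); // [ '--experimental-config-file=node.config.json' ]
```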

Changes

  1. lib/internal/test_runner/runner.js:

    • Removed --experimental-config-file from kFilterArgValues to allow propagation
  2. lib/internal/test_runner/utils.js:

    • Modified coverage detection to check the NODE_TEST_CONTEXT environment variable (which is already set for child processes), so coverage is only collected in the parent
  3. test/parallel/test-runner-cli.js:

    • Updated test expectations to verify config propagation works correctly
    • Maintained expectation of single coverage report
  4. test/fixtures/test-runner/options-propagation/experimental-config-file.test.mjs:

    • Updated test to expect config file to be received in child process
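The coverage gate in utils.js can be sketched roughly as follows. The function name and the exact NODE_TEST_CONTEXT values here are illustrative assumptions, not the real internals:

```javascript
// Hypothetical sketch of the coverage gate: child test processes are
// identified by NODE_TEST_CONTEXT, which the runner already sets when
// spawning. Coverage is collected only when that variable is absent
// (i.e. in the parent), so a single report is produced.
function shouldCollectCoverage(env, coverageEnabled) {
  if (!coverageEnabled) return false;
  // Presence of NODE_TEST_CONTEXT means "we are a spawned test
  // process": skip collection to avoid a duplicate report.
  if (env.NODE_TEST_CONTEXT !== undefined) return false;
  return true;
}

console.log(shouldCollectCoverage({}, true));                              // parent
console.log(shouldCollectCoverage({ NODE_TEST_CONTEXT: 'child-v8' }, true)); // child
```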

Fixes: #59021

Allow --experimental-config-file to be passed to child test processes
while preventing duplicate coverage reports by disabling coverage
collection in child processes through NODE_TEST_DISABLE_COVERAGE
environment variable.

This fixes config file options not being applied in child processes
while maintaining the fix for duplicate coverage reports.

Fixes: nodejs#59021
@nodejs-github-bot
Collaborator

Review requested:

  • @nodejs/test_runner

@nodejs-github-bot nodejs-github-bot added needs-ci PRs that need a full CI run. test_runner Issues and PRs related to the test runner subsystem. labels Jul 13, 2025
…sses

Use the existing NODE_TEST_CONTEXT environment variable to prevent
duplicate coverage reports in child processes instead of introducing
a new NODE_TEST_DISABLE_COVERAGE variable. This approach is more
consistent with the existing codebase patterns.

The NODE_TEST_CONTEXT variable is already used to identify when
running in a child process context, making it the appropriate
mechanism for controlling coverage collection behavior.

Fixes: nodejs#59021
@MoLow MoLow added the request-ci Add this label to start a Jenkins CI on a PR. label Jul 14, 2025

codecov bot commented Jul 14, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 90.05%. Comparing base (049664b) to head (25b0491).
Report is 11 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #59047      +/-   ##
==========================================
- Coverage   90.06%   90.05%   -0.01%     
==========================================
  Files         645      645              
  Lines      189130   189130              
  Branches    37094    37098       +4     
==========================================
- Hits       170339   170323      -16     
- Misses      11511    11516       +5     
- Partials     7280     7291      +11     
Files with missing lines           | Coverage Δ
lib/internal/test_runner/runner.js | 92.69% <ø> (-0.01%) ⬇️
lib/internal/test_runner/utils.js  | 60.53% <100.00%> (+0.06%) ⬆️

... and 25 files with indirect coverage changes


@JakobJingleheimer (Member) left a comment

Awesome, thanks for tackling this!

I think, though, that we need more end-to-end tests related to config propagation so we can end the musical chairs.

Ordinarily, I would say tackle in a follow-up PR, but we've already broken one thing fixing another a couple times now.

Perhaps those tests could be added in a separate PR, and the ones that fail we mark "expected to fail", and land that first. Then, in this PR unmark the addressed failures. If any remain in "expected to fail", we create tickets to subsequently fix.

@sjwhole (Author) commented Jul 14, 2025

@JakobJingleheimer Thanks for the feedback! I'd like to create a comprehensive test suite to prevent
future regressions with config propagation.

Could you provide more specific guidance on what end-to-end tests you'd like to see? I'm thinking of
covering:

Config file propagation scenarios:
- Experimental flags that should propagate to child processes
- Flags that should NOT propagate (like reporters)
- Coverage deduplication between parent/child processes


@pmarchini (Member) left a comment

I'm temporarily blocking the PR to better discuss this matter.

One important note: we deliberately removed this flag, as it could lead to side effects — for example, the duplicated report we saw with @JakobJingleheimer.
Reintroducing it could cause potential issues, such as the one described here: #58828.

I believe we might address this problem by supporting a new way to filter options, regardless of whether a config file is present.
From my perspective, we should be able to define user-defined options without requiring the runner to know the source of the option (flag or configuration file).
I'm still considering a potential solution that avoids hardcoding logic in conjunction with configurations.

Edit: We might end up deciding that we're fine with filtering via NODE_TEST_CONTEXT. The reason I removed this from propagation was that my initial intention, when I first introduced the testRunner namespace in the config file, was to avoid propagation.
I don't have strong opinions against using the environment variable, but I want to highlight that this creates a mismatch between how the config file options are filtered compared to the default approach.

cc @nodejs/test_runner

@github-actions github-actions bot removed the request-ci Add this label to start a Jenkins CI on a PR. label Jul 15, 2025

@JakobJingleheimer (Member) commented Jul 18, 2025

I think maybe the approach we want is to explicitly allow and disallow test-runner options to propagate. In that case, we'd want a test that checks the full list of test-runner config options and fails when any option is not explicitly allowed/disallowed; this will catch when a new option is added but wasn't accounted for (which I imagine will happen very easily).

I think what should happen is to compose the list of config options for the main thread, then compose a subset for the runner that allows/disallows specific ones, and then merge those into the spawn's env config (the child process should never receive the raw list).

These are the test-runner config options I found (there may be others, but this looks like all of 'em):

Caution

--test is SUPER problematic because it occurs last.

Option                           | File field | Propagate
experimental-test-module-mocks   |            |
test                             |            |
test-concurrency                 |            |
experimental-test-coverage       |            |
test-coverage-branches           |            |
test-coverage-functions          |            |
test-coverage-lines              |            |
test-coverage-exclude            |            |
test-coverage-include            |            |
test-force-exit                  |            |
test-global-setup                |            |
test-isolation                   |            | N/A
test-name-pattern                |            |
test-only                        |            |
test-reporter                    |            |
test-reporter-destination        |            |
test-shard                       |            |
test-skip-pattern                |            |
test-timeout                     |            |
test-update-snapshots            |            |
experimental-config-file         |            |
experimental-default-config-file |            |
env-file                         |            |
env-file-if-exists               |            |

Maaaaybe we need to expand the full list to all options though?


@pmarchini (Member)

Hey @JakobJingleheimer,

I think what should happen is to compose the list of config options for the main thread, and then compose a subset for runner that allows/disallows specific ones, and then merges those into the spawn's env config (the child process should never receive the raw).

This is already happening with one exception: the configuration file.

At the moment, the biggest issue is that we're filtering specific flags based on the argv flags of the main process here:

function filterExecArgv(arg, i, arr) {
  return !ArrayPrototypeIncludes(kFilterArgs, arg) &&
    !ArrayPrototypeSome(kFilterArgValues, (p) =>
      arg === p ||
      (i > 0 && arr[i - 1] === p) ||
      StringPrototypeStartsWith(arg, `${p}=`));
}

This wasn't a problem until we introduced the config file, which is passed as a single flag.
If this flag is filtered, it completely blocks the propagation of whatever is contained inside the file.

For this reason, I agree with you regarding the behaviour.
What I would like to see is an internal API that provides all the options, regardless of their source (CLI, env, config file), which the runner can use to properly filter and propagate.

I was thinking about reusing:

// getCLIOptionsValues() would serialize the option values from C++ land.
// It would error if the values are queried before bootstrap is
// complete so that we don't accidentally include runtime-dependent
// states into a runtime-independent snapshot.
function getCLIOptionsFromBinding() {
  return optionsDict ??= getCLIOptionsValues();
}

(from options.js)

But this function retrieves all the options, not only the user input.
This might be a non-issue considering that this operation must be done only once in the entire test lifecycle, and it needs to be converted into a set of flags (I'm not sure whether we have a better solution to propagate options to a new process), as we're doing here:

ParseResult ConfigReader::ProcessOptionValue(

WDYT?

@JakobJingleheimer
Copy link
Member

What I would like to see is an internal API that provides all the options, regardless of their source (CLI, env, config file), which the runner can use to properly filter and propagate.

Yes, that's what I meant 🙂

But actually, maybe this util is broadly useful? Now that I think of it, does getOption consider the different sources (I would assume so)? If so, that may be a perf hit each time it's called. In which case, the options dict would be very helpful.

Development

Successfully merging this pull request may close these issues.

--experimental-config-file broken in tests in 24.4.0
5 participants