
publish rails application name to Process Discovery after initialization #5504

Merged
wantsui merged 28 commits into master from call-publish-again-for-memfd-rails-app-name on Apr 1, 2026

publish rails application name to Process Discovery after initialization#5504
wantsui merged 28 commits intomasterfrom
call-publish-again-for-memfd-rails-app-name

Conversation

Collaborator

@wantsui wantsui commented Mar 25, 2026

What does this PR do?

Calls Process Discovery's publish step again once the Rails application name is available.

Motivation:

This came up while @raphaelgavache was explaining to me how process discovery works. This PR is a follow-up to #5468, because the default behavior for Process Discovery is to publish the tags before the Rails app has initialized.

This ensures we publish once after we obtain the right information.

Change log entry

Yes.

Additional Notes:

Since folks are already reviewing #5468, I didn't want to complicate that PR with this change.

How to test the change?

  1. System Tests: Enable all process discovery tests for Ruby (system-tests#6637)
  2. I also manually tested this E2E to confirm that it would work.
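The double-publish behavior described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual dd-trace-rb internals: process discovery publishes once at library load with incomplete tags, then publishes again after Rails initialization once the real application name is known.

```ruby
# Hypothetical sketch of publishing process tags twice: once before the
# Rails app boots, and once after initialization when the app name exists.
module ProcessDiscovery
  @published = []

  # In the real library the tags are written to a memfd for discovery by
  # the Datadog agent; here we just record them to show the two publishes.
  def self.publish(tags)
    @published << tags
  end

  def self.published
    @published
  end
end

# 1) Early publish: the Rails app has not initialized, so no app name yet.
ProcessDiscovery.publish({ service: "unknown" })

# 2) After Rails initialization (e.g. from an after_initialize hook),
#    publish again with the resolved application name.
app_name = "MyApp" # would come from the Rails application class in practice
ProcessDiscovery.publish({ service: app_name })
```

The second publish simply overwrites the earlier, incomplete tags, which matches the PR's goal of publishing once more "after we obtain the right information."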

@wantsui wantsui added the AI Generated label (largely based on code generated by an AI or LLM; this label is the same across all dd-trace-* repos) Mar 25, 2026
@github-actions github-actions bot added the core label (involves Datadog core libraries) Mar 25, 2026
@wantsui wantsui marked this pull request as ready for review March 25, 2026 16:18
@wantsui wantsui requested a review from a team as a code owner March 25, 2026 16:18

pr-commenter bot commented Mar 25, 2026

Benchmarks

Benchmark execution time: 2026-03-27 19:39:10

Comparing candidate commit 5913f2d in PR branch call-publish-again-for-memfd-rails-app-name with baseline commit e9cf9cb in branch add-application-name-rails-process-tags.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 46 metrics, 0 unstable metrics.

Explanation

This is an A/B test comparing a candidate commit's performance against that of a baseline commit. Performance changes are noted in the tables below as:

  • 🟩 = significantly better candidate vs. baseline
  • 🟥 = significantly worse candidate vs. baseline

We compute a confidence interval (CI) over the relative difference of means between metrics from the candidate and baseline commits, considering the baseline as the reference.

If the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD), the change is considered significant.

Feel free to reach out to #apm-benchmarking-platform on Slack if you have any questions.

More details about the CI and significant changes

You can imagine this CI as a range of values that is likely to contain the true difference of means between the candidate and baseline commits.

CIs of the difference of means are often centered around 0%, because often changes are not that big:

---------------------------------(------|---^--------)-------------------------------->
                              -0.6%    0%  0.3%     +1.2%
                                 |          |        |
         lower bound of the CI --'          |        |
sample mean (center of the CI) -------------'        |
         upper bound of the CI ----------------------'

As described above, a change is considered significant if the CI is entirely outside the configured SIGNIFICANT_IMPACT_THRESHOLD (or the deprecated UNCONFIDENCE_THRESHOLD).

For instance, for an execution time metric, this confidence interval indicates a significantly worse performance:

----------------------------------------|---------|---(---------^---------)---------->
                                       0%        1%  1.3%      2.2%      3.1%
                                                  |   |         |         |
       significant impact threshold --------------'   |         |         |
                      lower bound of CI --------------'         |         |
       sample mean (center of the CI) --------------------------'         |
                      upper bound of CI ----------------------------------'
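The significance rule above can be expressed as a small check. This is illustrative only: the numbers mirror the two diagrams, and the 1% threshold is an assumed value, not the platform's configured SIGNIFICANT_IMPACT_THRESHOLD.

```ruby
# Sketch of the significance rule: a change is significant only if the
# whole confidence interval lies outside the impact threshold.
SIGNIFICANT_IMPACT_THRESHOLD = 0.01 # assumed 1%, for illustration

def significant?(ci_lower, ci_upper, threshold = SIGNIFICANT_IMPACT_THRESHOLD)
  # Entirely above +threshold => significantly worse (for a time metric);
  # entirely below -threshold => significantly better.
  ci_lower > threshold || ci_upper < -threshold
end

significant?(-0.006, 0.012) # CI of -0.6%..+1.2% straddles 0% -> false
significant?(0.013, 0.031)  # CI of +1.3%..+3.1% is beyond 1%   -> true
```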

@wantsui wantsui requested a review from raphaelgavache March 26, 2026 13:40
@p-datadog p-datadog (Member) left a comment
Looks good overall — had a question about what happens during reconfiguration.

@p-datadog p-datadog (Member) left a comment
👍 LGTM

Base automatically changed from add-application-name-rails-process-tags to master April 1, 2026 22:08
@wantsui wantsui requested review from a team as code owners April 1, 2026 22:08
@wantsui wantsui requested a review from vpellan April 1, 2026 22:08

datadog-prod-us1-3 bot commented Apr 1, 2026

✅ Tests

🎉 All green!

❄️ No new flaky tests detected
🧪 All tests passed

🎯 Code Coverage (details)
Patch Coverage: 92.86%
Overall Coverage: 95.35% (-0.96%)

🔗 Commit SHA: 9d1bddc

@wantsui wantsui merged commit 35d11a4 into master Apr 1, 2026
629 checks passed
@wantsui wantsui deleted the call-publish-again-for-memfd-rails-app-name branch April 1, 2026 23:04
@github-actions github-actions bot added this to the 2.31.0 milestone Apr 1, 2026