feat: [DEVOPS-2394] fork coralogix/opentelemetry-lambda and publish extend-nodejs layer #7
Conversation
Co-authored-by: Claude Haiku 4.5 <noreply@anthropic.com>
- Remove dotnet/, java/, ruby/, go/ language dirs and associated CI workflows (ci-java.yml, release-layer-java.yml, release-layer-ruby.yml). We only ship Node.js and Python Lambda layers; upstream's other language layers are noise.
- Rewrite README.md for Extend fork scope: layer name, layout, publish flow, consumer wiring via NodeLambdaBuilder.otelTracingProps.
- Add UPSTREAM.md documenting fork-point SHAs for both coralogix/opentelemetry-lambda and coralogix/opentelemetry-js-contrib, plus the manual sync process and the pin-update checklist (publish-sandbox.sh, workflow, UPSTREAM.md all together). Automation tracked in DEVOPS-2502.
Removed unused package ecosystems for gradle, pip, and bundler. Updated npm configuration to include registries and cooldown settings.
Removed unused Java and Ruby sections from release.yml
- Drop opentelemetry-js clone + OPENTELEMETRY_JS_PATH. Unused since cx-js was dropped (we resolve @opentelemetry/instrumentation from npm now).
- Pin CX_CONTRIB_SHA to match publish-sandbox.sh and the publish workflow so local builds don't drift from CI.
- Unify clone path under .build-cache/opentelemetry-js-contrib so both scripts share one cache on dev machines.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Remove python/ entirely. Only nodejs layers ship; if Python autoinstrumentation is needed later, start from origin/python-instrumentation. Drop the commented-out pip block in dependabot.yml and the python/README.md link in README.md.
- UPSTREAM.md: replace the scratch-path reference to the fork-research doc with the Confluence page link. Add a remote-setup block (remotes aren't checked in; fresh clones only have origin). Add a third fork-points row for open-telemetry/opentelemetry-lambda with the tag (layer-nodejs/0.10.0, c9e67c4) coralogix last merged in via 436f3d0. Add a sync block plus a note that coralogix absorbs upstream-upstream selectively (tags or cherry-picks), so the sync skill should walk by patch-id, not merge-base.
- .gitignore: add .claude/worktrees/ and extend/plan-*.md so transient agent scaffolding doesn't leak into commits again.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
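The "walk by patch-id, not merge-base" point above can be sketched in shell. This is a minimal illustration, assuming only that `git` is on PATH; `patch_id_of` and `missing_from_fork` are hypothetical helper names, not functions from this repo:

```shell
#!/usr/bin/env bash
# Sketch: compare commits by patch-id so cherry-picked commits (same diff,
# different SHA) are recognized as already absorbed by the fork.
set -euo pipefail

# Stable patch-id of a single commit.
patch_id_of() {
  git -C "$1" show "$2" | git patch-id --stable | cut -d' ' -f1
}

# Print upstream commits whose patch-id appears nowhere in the fork range.
missing_from_fork() {
  local repo="$1" upstream_range="$2" fork_range="$3"
  local fork_ids c id
  fork_ids=$(git -C "$repo" log -p "$fork_range" | git patch-id --stable | cut -d' ' -f1)
  for c in $(git -C "$repo" rev-list "$upstream_range"); do
    id=$(patch_id_of "$repo" "$c")
    grep -qx "$id" <<<"$fork_ids" || echo "$c"
  done
}
```

A merge-base walk would report the cherry-picked commit as missing; the patch-id walk does not, which is why it suits a fork that absorbs upstream selectively.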
Important: Review skipped. Auto incremental reviews are disabled on this repository; check the settings in the CodeRabbit UI or the ⚙️ Run configuration. (Configuration used: Organization UI. Review profile: CHILL. Plan: Pro.)
Walkthrough

The pull request substantially restructures the repository by removing multiple GitHub Actions CI/CD workflows and language-specific implementations. Deleted workflows include collectors, language-specific builds (Java, Python, Node.js, Ruby), and release pipelines. A new workflow for publishing an Extend OpenTelemetry layer is added. Language directories (Java, Python, Ruby, Go) and their sample applications are largely removed. Dependabot and release configurations are deleted or modified. Coralogix exporter endpoints are consolidated to use unified ingress. Build scripts are updated to support new paths. The code owner assignment changes, and documentation is revised to reflect the fork's scope.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
collector/go.mod (1)

24-25: ⚠️ Potential issue | 🟠 Major

Upgrade contrib providers to compatible versions.

Lines 24-25 pin `s3provider` and `secretsmanagerprovider` at `v0.109.0`, which is incompatible with the upgraded collector modules at `v0.150.0`/`v1.56.0` (lines 29-37). The compatible version for these providers is `v0.149.0`, which depends on `confmap v1.55.0` and `featuregate v1.55.0` that align with the core module versions. Upgrade both providers to `v0.149.0`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@collector/go.mod` around lines 24 - 25, Update the pinned contrib provider versions for s3provider and secretsmanagerprovider from v0.109.0 to v0.149.0 so they are compatible with the upgraded collector modules; locate the go.mod entries for github.com/open-telemetry/opentelemetry-collector-contrib/confmap/provider/s3provider and github.com/open-telemetry/opentelemetry-collector-contrib/confmap/provider/secretsmanagerprovider and change their version tokens to v0.149.0, then run go mod tidy to refresh dependencies.

nodejs/README.md (1)

15-32: ⚠️ Potential issue | 🟡 Minor

Documentation references removed build dependencies.

The README lists three environment variables as required (lines 18-21): `OPENTELEMETRY_JS_PATH`, `OPENTELEMETRY_JS_CONTRIB_PATH`, and `IITM_PATH`. However, the build scripts only use `OPENTELEMETRY_JS_CONTRIB_PATH`. Update the documentation to remove the references to `OPENTELEMETRY_JS_PATH` and `IITM_PATH`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nodejs/README.md` around lines 15 - 32, Update the README to reflect the actual build requirements by removing the two unused environment variable references and example exports for OPENTELEMETRY_JS_PATH and IITM_PATH; leave only OPENTELEMETRY_JS_CONTRIB_PATH mentioned and keep the example export showing export OPENTELEMETRY_JS_CONTRIB_PATH=./opentelemetry-js-contrib-cx. Remove any sentences that state OPENTELEMETRY_JS_PATH or IITM_PATH are required and delete their export lines from the code block so the docs match the build scripts.

collector/processor/decoupleprocessor/factory.go (1)

63-71: ⚠️ Potential issue | 🔴 Critical

Incorrect API function call: use `NewTracesProcessor` instead.

The code calls `processorhelper.NewTraces()`, which does not exist in v0.150.0. The correct API is `processorhelper.NewTracesProcessor()`. Update lines 63, 84, and 105 to use the proper function names: `NewTracesProcessor`, `NewMetricsProcessor`, and `NewLogsProcessor` respectively. The function signatures and arguments are compatible.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@collector/processor/decoupleprocessor/factory.go` around lines 63 - 71, The factory currently calls non-existent helper constructors; replace the incorrect processorhelper.NewTraces / NewMetrics / NewLogs calls with the v0.150.0 API names processorhelper.NewTracesProcessor, processorhelper.NewMetricsProcessor and processorhelper.NewLogsProcessor respectively (preserving the same ctx, params, cfg, next, handler functions like dp.processTraces/dp.processMetrics/dp.processLogs and options such as processorhelper.WithCapabilities(processorCapabilities) and processorhelper.WithShutdown(dp.shutdown)); ensure you update all three call sites so signatures and passed arguments remain unchanged.
🧹 Nitpick comments (4)
scripts/build_nodejs_layer.sh (2)

87-88: Consider pinning versions for global npm tools.

Installing `copyfiles`, `bestzip`, and `rimraf` without version pins (`npm install -g copyfiles bestzip rimraf`) can lead to non-reproducible builds if these tools release breaking changes.

♻️ Proposed fix to pin tool versions

```diff
 # Install copyfiles and bestzip # used by `npm run clean/compile`
-npm install -g copyfiles bestzip rimraf
+npm install -g copyfiles@2 bestzip@4 rimraf@5
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_nodejs_layer.sh` around lines 87 - 88, The global npm install command `npm install -g copyfiles bestzip rimraf` should pin specific versions to ensure reproducible builds; update the script (the line installing global tools) to install explicit versions (e.g., `copyfiles@x.y.z bestzip@x.y.z rimraf@x.y.z`) or read versions from variables at the top of the script, and/or move these tools into package.json devDependencies and use npm scripts (or npx) instead of globally installing to guarantee consistent tool versions across environments.

96-97: Blanket error suppression may hide legitimate failures.

The `|| true` at the end of line 97 suppresses all errors from the `find ... -exec rm -rf` command, including permission errors or filesystem issues that could indicate a real problem.

Consider suppressing only the expected "directory not found during traversal" errors:

♻️ More targeted error handling

```diff
-find node_modules -type d \( -name "test" -o -name "tests" -o -name "docs" -o -name "doc" \) -exec rm -rf {} + 2>/dev/null || true
+find node_modules -type d \( -name "test" -o -name "tests" -o -name "docs" -o -name "doc" \) -print0 2>/dev/null | xargs -0 rm -rf
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/build_nodejs_layer.sh` around lines 96 - 97, The blanket "|| true" at the end of the find+rm command hides all failures; remove it and instead suppress only the expected "No such file or directory" messages by filtering stderr for that pattern. Replace the existing "find node_modules -type d ... -exec rm -rf {} + 2>/dev/null || true" invocation with one that captures stderr and filters out the specific benign message (e.g., redirect stderr through grep -v "No such file or directory" to stderr) so permission or other real errors still surface; update the line containing the find ... -exec rm -rf invocation accordingly.

extend/README.md (2)
30-42: Clarify that ARIZE_* and S3 env vars are conditionally required.

The table lists `ARIZE_API_KEY_SECRET`, `ARIZE_SPACE_ID`, `ARIZE_PROJECT_NAME`, and `ARIZE_S3_BUCKET_NAME` under "Required env vars", but these are only required when using the Arize/S3 configs, not for the default cx-only config.

Consider restructuring to show which vars are required for each config:

📝 Suggested documentation structure

```diff
-Required env vars:
+**Always required** (all configs):
 
 | Var | Source | Purpose |
 |-----|--------|---------|
 | `CX_SECRET` | existing | CX API key — Secrets Manager name or ARN |
 | `CX_APPLICATION` | existing | CX application tag |
 | `CX_SUBSYSTEM` | existing | CX subsystem tag |
+
+**Required for Arize configs** (`cx-arize`, `cx-arize-s3`):
+
+| Var | Source | Purpose |
+|-----|--------|---------|
 | `ARIZE_API_KEY_SECRET` | new | Arize OTel API key — Secrets Manager name or ARN |
 | `ARIZE_SPACE_ID` | new | Arize space ID (Relay global ID) |
 | `ARIZE_PROJECT_NAME` | new | Arize project name |
+
+**Required for S3 archival** (`cx-arize-s3` only):
+
+| Var | Source | Purpose |
+|-----|--------|---------|
 | `ARIZE_S3_BUCKET_NAME` | new | S3 bucket for archival |
+
+**Optional**:
+
+| Var | Default | Purpose |
+|-----|---------|---------|
 | `CX_ENDPOINT` | optional | default `ingress.us2.coralogix.com:443` (unified ingress) |
 | `ARIZE_COLLECTOR_ENDPOINT` | optional | default `otlp.arize.com:443` (gRPC) |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extend/README.md` around lines 30 - 42, Update the environment variables documentation to indicate that ARIZE_* and ARIZE_S3 variables are conditional: move `ARIZE_API_KEY_SECRET`, `ARIZE_SPACE_ID`, `ARIZE_PROJECT_NAME`, and `ARIZE_S3_BUCKET_NAME` out of the global "Required env vars" table and present them in a separate table or subsection titled "Required when using Arize/S3" (or include per-config tables for "CX-only" vs "Arize/S3"). Keep `CX_SECRET`, `CX_APPLICATION`, and `CX_SUBSYSTEM` in the CX-only required list, mark `CX_ENDPOINT` and `ARIZE_COLLECTOR_ENDPOINT` as optional with defaults, and add a short note explaining that the Arize/S3 vars are only needed if the Arize/S3 configuration is enabled.

46: Documentation mentions branch name but build uses pinned SHA.

The docs say to use branch `coralogix-autoinstrumentation`, but `scripts/build-nodejs.sh` checks out a specific SHA (`3a9691a699ddd06c3644eec70bf4b50cc4217ba3`) regardless of branch. This is actually safer for reproducibility.

Consider updating the docs to reflect the pinned-SHA approach:

📝 Suggested clarification

```diff
-Follows upstream: `./scripts/build_nodejs_layer.sh` — requires a sibling checkout of `coralogix/opentelemetry-js-contrib` (branch `coralogix-autoinstrumentation`) set via `OPENTELEMETRY_JS_CONTRIB_PATH`. See `.github/workflows/publish-extend-otel-layer.yml` for the published flow.
+Follows upstream: `./scripts/build_nodejs_layer.sh`. For local dev, `./scripts/build-nodejs.sh` auto-clones `coralogix/opentelemetry-js-contrib` at a pinned SHA to `.build-cache/`. Override with `OPENTELEMETRY_JS_CONTRIB_PATH` to use a local checkout. See `.github/workflows/publish-extend-otel-layer.yml` for the published flow.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extend/README.md` at line 46, The README currently instructs using branch coralogix-autoinstrumentation but the build script (./scripts/build_nodejs_layer.sh / scripts/build-nodejs.sh) actually checks out a pinned SHA (3a9691a699ddd06c3644eec70bf4b50cc4217ba3); update the README text to state that the build uses a specific pinned commit for reproducibility, mention that OPENTELEMETRY_JS_CONTRIB_PATH can point to a sibling checkout but the script will reset/checkout the pinned SHA, and include the exact SHA used (3a9691a699ddd06c3644eec70bf4b50cc4217ba3) and a brief note linking to .github/workflows/publish-extend-otel-layer.yml for the published flow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/dependabot.yml:
- Around line 38-39: Dependabot is using a CodeArtifact registry token that
expires and blocks npm updates; either add a scheduled token-rotation Action
that refreshes CODEARTIFACT_AUTH_TOKEN every ~10–11 hours and updates the
repository secret, or migrate the dependabot registry block (the registries: -
codeartifact entry and its codeartifact config) to OIDC by replacing the static
token with the new fields aws-region, account-id, role-name, domain, and
domain-owner so Dependabot can assume an AWS role via OIDC and eliminate manual
rotation; implement one of these strategies and update the
.github/dependabot.yml codeartifact registry entries accordingly.
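A minimal sketch of the OIDC-based registry block this comment describes. The field names (`aws-region`, `account-id`, `role-name`, `domain`, `domain-owner`) are taken from the comment itself; every concrete value below is a placeholder, and the exact schema should be verified against the Dependabot configuration docs before adopting:

```yaml
# .github/dependabot.yml (sketch, placeholder values)
version: 2
registries:
  codeartifact:
    type: npm-registry
    url: https://example-domain-111111111111.d.codeartifact.us-east-1.amazonaws.com/npm/example-repo/
    # OIDC fields replacing the static CODEARTIFACT_AUTH_TOKEN secret:
    aws-region: us-east-1
    account-id: "111111111111"
    role-name: dependabot-codeartifact
    domain: example-domain
    domain-owner: "111111111111"
updates:
  - package-ecosystem: npm
    directory: /nodejs
    registries:
      - codeartifact
    schedule:
      interval: weekly
```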
In @.github/workflows/publish-extend-otel-layer.yml:
- Around line 76-79: The inline `with:` mappings for the
actions/download-artifact@v4 steps are invalid YAML; replace the inline braces
with proper block mappings so they parse correctly. Update the two occurrences
that use `with: { name: collector-${{ matrix.architecture }}, path: dl/collector
}` and `with: { name: nodejs-layer, path: dl/nodejs }` to use block-style keys
under `with:` (name: <value> and path: <value>) for the corresponding
actions/download-artifact@v4 steps so actionlint/yamllint pass.
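The block-style rewrite described above might look like this; the artifact names and paths are copied from the flow-style originals quoted in the comment, while the surrounding step layout is illustrative:

```yaml
# Before (flow-style, flagged by actionlint):
#   with: { name: collector-${{ matrix.architecture }}, path: dl/collector }
- uses: actions/download-artifact@v4
  with:
    name: collector-${{ matrix.architecture }}
    path: dl/collector
- uses: actions/download-artifact@v4
  with:
    name: nodejs-layer
    path: dl/nodejs
```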
- Around line 20-22: The workflow currently exposes AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY at workflow-level env; restrict them by removing those two
keys from the top-level env and add them only to the job "package-and-publish"
(or the specific publish step) so only the publish task receives secrets. Update
the env block for the "package-and-publish" job (or the publish step) to include
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} and AWS_SECRET_ACCESS_KEY:
${{ secrets.AWS_SECRET_ACCESS_KEY }} and ensure other jobs (collector build /
Node.js build) no longer inherit those env vars.
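A sketch of the suggested scoping: the job name `package-and-publish` and the two secrets come from the comment above, while the runner and steps are placeholders:

```yaml
# No AWS credentials in the workflow-level env block; only this job gets them.
jobs:
  package-and-publish:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    steps:
      - uses: actions/checkout@v4
      # ... package and publish steps ...
```

Scoping the env to the job means the collector and Node.js build jobs never see the credentials, shrinking the blast radius of a compromised build step.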
In `@collector/processor/decoupleprocessor/go.mod`:
- Around line 9-18: The go.mod entry for the OpenTelemetry confmap module is
pinned to v1.16.0 causing version skew with the rest of the Collector stack;
update the module line for go.opentelemetry.io/collector/confmap to v1.56.0 to
match the other collector modules (e.g., the surrounding
go.opentelemetry.io/collector/* entries), then run go mod tidy to reconcile
dependencies and verify builds/tests.
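The one-line change the prompt describes, sketched as a go.mod fragment (the comment about surrounding entries is illustrative; run `go mod tidy` afterwards as the prompt says):

```go
// collector/processor/decoupleprocessor/go.mod (fragment)
require (
	// other go.opentelemetry.io/collector/* entries already sit at v0.150.0 / v1.56.0
	go.opentelemetry.io/collector/confmap v1.56.0 // was pinned at v1.16.0
)
```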
In `@nodejs/packages/cx-wrapper/package.json`:
- Line 38: The package version mismatch for OpenTelemetry will cause runtime
incompatibilities: update the dependency set so all related OTel packages use
the same minor version—either revert "@opentelemetry/instrumentation" back to
"0.213.0" or bump "@opentelemetry/instrumentation-http",
"@opentelemetry/instrumentation-grpc", and
"@opentelemetry/exporter-trace-otlp-proto" to "0.214.0" so all four packages
share the identical 0.214.0 (or all remain 0.213.0) version; modify the
package.json entries for those specific package names accordingly and run
install to verify no peer/version conflicts remain.
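Of the two options the prompt allows, bumping everything to 0.214.0 would look like the fragment below (a sketch; the alternative keeps all four at 0.213.0, the point being that they match):

```json
{
  "dependencies": {
    "@opentelemetry/instrumentation": "0.214.0",
    "@opentelemetry/instrumentation-http": "0.214.0",
    "@opentelemetry/instrumentation-grpc": "0.214.0",
    "@opentelemetry/exporter-trace-otlp-proto": "0.214.0"
  }
}
```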
In `@scripts/build-nodejs.sh`:
- Around line 21-29: The script currently always treats CX_CONTRIB_CACHE (set
from OPENTELEMETRY_JS_CONTRIB_PATH or default) as a cache to fetch and checkout
CX_CONTRIB_SHA which will detach/overwrite a caller-supplied local checkout;
change the logic so when OPENTELEMETRY_JS_CONTRIB_PATH is set you only validate
that CX_CONTRIB_CACHE is a git repo with the expected commit (or at least exists
and is readable) and then reuse it without running git fetch/checkout; only
perform the fetch/checkout workflow (using CX_CONTRIB_REPO, git clone, git
fetch, git checkout with CX_CONTRIB_SHA) when OPENTELEMETRY_JS_CONTRIB_PATH is
unset and you are auto-resolving into the build cache, leaving a developer’s
local checkout untouched.
In `@scripts/publish-sandbox.sh`:
- Around line 21-37: The script currently always clones/fetches and checks out
CX_CONTRIB_SHA into CX_CONTRIB_CACHE, which mutates a caller-provided
OPENTELEMETRY_JS_CONTRIB_PATH; change the logic so that if
OPENTELEMETRY_JS_CONTRIB_PATH is already set (and non-empty) the script uses it
unchanged and skips cloning/fetching/checking out; only when
OPENTELEMETRY_JS_CONTRIB_PATH is unset should you set CX_CONTRIB_CACHE (from
OPENTELEMETRY_JS_CONTRIB_PATH default), perform mkdir/git clone/git fetch/git -C
... checkout of CX_CONTRIB_REPO at CX_CONTRIB_SHA and then export
OPENTELEMETRY_JS_CONTRIB_PATH="$(cd "$CX_CONTRIB_CACHE" && pwd)". Ensure
references to OPENTELEMETRY_JS_CONTRIB_PATH, CX_CONTRIB_CACHE, CX_CONTRIB_REPO,
CX_CONTRIB_SHA and the git -C ... checkout/fetch commands are used to locate the
code to change.
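Both script comments describe the same guard: reuse a caller-supplied checkout untouched, and only clone/fetch/checkout into the build cache when auto-resolving. A minimal sketch, assuming bash and git; `resolve_contrib` is a hypothetical helper name (the real scripts inline this logic):

```shell
#!/usr/bin/env bash
set -euo pipefail

# resolve_contrib <repo_url> <pin_sha> <cache_dir> [caller_path]
# Prints the directory to build from.
resolve_contrib() {
  local repo_url="$1" pin_sha="$2" cache_dir="$3" caller_path="${4:-}"
  if [ -n "$caller_path" ]; then
    # Caller-supplied checkout: validate only, never fetch/checkout into it.
    if ! git -C "$caller_path" cat-file -e "$pin_sha^{commit}" 2>/dev/null; then
      echo "warning: $caller_path does not contain pinned SHA $pin_sha" >&2
    fi
    echo "$caller_path"
    return 0
  fi
  # No caller path: auto-resolve into the build cache at the pinned SHA.
  if [ ! -d "$cache_dir/.git" ]; then
    git clone --quiet "$repo_url" "$cache_dir"
  fi
  git -C "$cache_dir" cat-file -e "$pin_sha^{commit}" 2>/dev/null \
    || git -C "$cache_dir" fetch --quiet origin
  git -C "$cache_dir" checkout --quiet "$pin_sha"
  echo "$cache_dir"
}
```

With this shape, `OPENTELEMETRY_JS_CONTRIB_PATH` being set maps to passing `caller_path`, and a developer's local checkout is never detached or overwritten.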
In `@UPSTREAM.md`:
- Line 5: The sentence on Line 5 of UPSTREAM.md incorrectly lists only two pin
locations; update that sentence to state three pin locations so it matches Lines
63-68—mentioning scripts/publish-sandbox.sh,
.github/workflows/publish-extend-otel-layer.yml, and scripts/build-nodejs.sh
(the `coralogix/opentelemetry-js-contrib` pin tracked at build time) so the doc
consistently reflects all places that must be updated to avoid SHA drift.
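The three-pin-location invariant lends itself to a mechanical check. A sketch, with `check_pin_drift` as a hypothetical helper (the file list mirrors the three locations named above; the 40-hex grep is a simplification):

```shell
#!/usr/bin/env bash
set -euo pipefail

# check_pin_drift <file>...: succeed and print the pin iff every given file
# carries the same single 40-hex SHA.
check_pin_drift() {
  local shas
  shas=$(grep -hoE '[0-9a-f]{40}' "$@" | sort -u || true)
  if [ -z "$shas" ] || [ "$(wc -l <<<"$shas")" -ne 1 ]; then
    echo "pin drift (or no pin) across: $*" >&2
    return 1
  fi
  echo "$shas"
}
```

Run as e.g. `check_pin_drift scripts/publish-sandbox.sh scripts/build-nodejs.sh .github/workflows/publish-extend-otel-layer.yml`; wiring it into CI would catch the drift this doc warns about before it ships.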
---
Outside diff comments:
In `@collector/go.mod`:
- Around line 24-25: Update the pinned contrib provider versions for s3provider
and secretsmanagerprovider from v0.109.0 to v0.149.0 so they are compatible with
the upgraded collector modules; locate the go.mod entries for
github.com/open-telemetry/opentelemetry-collector-contrib/confmap/provider/s3provider
and
github.com/open-telemetry/opentelemetry-collector-contrib/confmap/provider/secretsmanagerprovider
and change their version tokens to v0.149.0, then run go mod tidy to refresh
dependencies.
In `@collector/processor/decoupleprocessor/factory.go`:
- Around line 63-71: The factory currently calls non-existent helper
constructors; replace the incorrect processorhelper.NewTraces / NewMetrics /
NewLogs calls with the v0.150.0 API names processorhelper.NewTracesProcessor,
processorhelper.NewMetricsProcessor and processorhelper.NewLogsProcessor
respectively (preserving the same ctx, params, cfg, next, handler functions like
dp.processTraces/dp.processMetrics/dp.processLogs and options such as
processorhelper.WithCapabilities(processorCapabilities) and
processorhelper.WithShutdown(dp.shutdown)); ensure you update all three call
sites so signatures and passed arguments remain unchanged.
In `@nodejs/README.md`:
- Around line 15-32: Update the README to reflect the actual build requirements
by removing the two unused environment variable references and example exports
for OPENTELEMETRY_JS_PATH and IITM_PATH; leave only
OPENTELEMETRY_JS_CONTRIB_PATH mentioned and keep the example export showing
export OPENTELEMETRY_JS_CONTRIB_PATH=./opentelemetry-js-contrib-cx. Remove any
sentences that state OPENTELEMETRY_JS_PATH or IITM_PATH are required and delete
their export lines from the code block so the docs match the build scripts.
---
Nitpick comments:
In `@extend/README.md`:
- Around line 30-42: Update the environment variables documentation to indicate
that ARIZE_* and ARIZE_S3 variables are conditional: move
`ARIZE_API_KEY_SECRET`, `ARIZE_SPACE_ID`, `ARIZE_PROJECT_NAME`, and
`ARIZE_S3_BUCKET_NAME` out of the global "Required env vars" table and present
them in a separate table or subsection titled "Required when using Arize/S3" (or
include per-config tables for "CX-only" vs "Arize/S3"). Keep `CX_SECRET`,
`CX_APPLICATION`, and `CX_SUBSYSTEM` in the CX-only required list, mark
`CX_ENDPOINT` and `ARIZE_COLLECTOR_ENDPOINT` as optional with defaults, and add
a short note explaining that the Arize/S3 vars are only needed if the Arize/S3
configuration is enabled.
- Line 46: The README currently instructs using branch
coralogix-autoinstrumentation but the build script
(./scripts/build_nodejs_layer.sh / scripts/build-nodejs.sh) actually checks out
a pinned SHA (3a9691a699ddd06c3644eec70bf4b50cc4217ba3); update the README text
to state that the build uses a specific pinned commit for reproducibility,
mention that OPENTELEMETRY_JS_CONTRIB_PATH can point to a sibling checkout but
the script will reset/checkout the pinned SHA, and include the exact SHA used
(3a9691a699ddd06c3644eec70bf4b50cc4217ba3) and a brief note linking to
.github/workflows/publish-extend-otel-layer.yml for the published flow.
In `@scripts/build_nodejs_layer.sh`:
- Around line 87-88: The global npm install command `npm install -g copyfiles
bestzip rimraf` should pin specific versions to ensure reproducible builds;
update the script (the line installing global tools) to install explicit
versions (e.g., `copyfiles@x.y.z bestzip@x.y.z rimraf@x.y.z`) or read versions
from variables at the top of the script, and/or move these tools into
package.json devDependencies and use npm scripts (or npx) instead of globally
installing to guarantee consistent tool versions across environments.
- Around line 96-97: The blanket "|| true" at the end of the find+rm command
hides all failures; remove it and instead suppress only the expected "No such
file or directory" messages by filtering stderr for that pattern. Replace the
existing "find node_modules -type d ... -exec rm -rf {} + 2>/dev/null || true"
invocation with one that captures stderr and filters out the specific benign
message (e.g., redirect stderr through grep -v "No such file or directory" to
stderr) so permission or other real errors still surface; update the line
containing the find ... -exec rm -rf invocation accordingly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 980b169a-23c8-4ab9-abaa-3383e58a9c0e
⛔ Files ignored due to path filters (7)
- collector/go.sum is excluded by !**/*.sum
- collector/lambdacomponents/go.sum is excluded by !**/*.sum
- collector/processor/coldstartprocessor/go.sum is excluded by !**/*.sum
- collector/processor/decoupleprocessor/go.sum is excluded by !**/*.sum
- collector/receiver/telemetryapireceiver/go.sum is excluded by !**/*.sum
- go/sample-apps/function/go.sum is excluded by !**/*.sum
- java/gradle/wrapper/gradle-wrapper.jar is excluded by !**/*.jar
📒 Files selected for processing (171)
.github/CODEOWNERS, .github/ISSUE_TEMPLATE/bug_report.md, .github/ISSUE_TEMPLATE/feature_request.md, .github/dependabot.yml, .github/release.yml, .github/workflows/ci-collector.yml, .github/workflows/ci-java.yml, .github/workflows/ci-nodejs.yml, .github/workflows/ci-python.yml, .github/workflows/ci-shellcheck.yml, .github/workflows/ci-terraform.yml, .github/workflows/close-stale.yaml, .github/workflows/codeql.yml, .github/workflows/layer-publish.yml, .github/workflows/publish-extend-otel-layer.yml, .github/workflows/publish-nodejs.yml, .github/workflows/release-layer-collector.yml, .github/workflows/release-layer-java.yml, .github/workflows/release-layer-nodejs.yml, .github/workflows/release-layer-python.yml, .github/workflows/release-layer-ruby.yml, .gitignore, CONTRIBUTING.md, README.md, RELEASE.md, UPSTREAM.md, ci-scripts/publish_layer.sh, ci-scripts/publish_production.sh, ci-scripts/publish_test.sh, collector/Makefile, collector/Makefile.Common, collector/config.yaml, collector/go.mod, collector/lambdacomponents/default.go, collector/lambdacomponents/exporter/logging.go, collector/lambdacomponents/go.mod, collector/processor/coldstartprocessor/factory.go, collector/processor/coldstartprocessor/go.mod, collector/processor/decoupleprocessor/factory.go, collector/processor/decoupleprocessor/go.mod, collector/receiver/telemetryapireceiver/go.mod, dev/build-nodejs.sh, docs/design_proposal.md, dotnet/README.md, dotnet/sample-apps/aws-sdk/deploy/wrapper/main.tf, dotnet/sample-apps/aws-sdk/deploy/wrapper/outputs.tf, dotnet/sample-apps/aws-sdk/deploy/wrapper/variables.tf, dotnet/sample-apps/aws-sdk/wrapper/SampleApps/AwsSdkSample.sln, dotnet/sample-apps/aws-sdk/wrapper/SampleApps/AwsSdkSample/AwsSdkSample.csproj, dotnet/sample-apps/aws-sdk/wrapper/SampleApps/AwsSdkSample/Function.cs, dotnet/sample-apps/aws-sdk/wrapper/SampleApps/AwsSdkSample/Properties/launchSettings.json, dotnet/sample-apps/aws-sdk/wrapper/SampleApps/build.sh, extend/README.md, extend/collector-config-cx-arize-s3.yaml, extend/collector-config-cx-arize.yaml, extend/collector-config-cx-only.yaml, go/README.md, go/sample-apps/aws-sdk/deploy/wrapper/main.tf, go/sample-apps/aws-sdk/deploy/wrapper/outputs.tf, go/sample-apps/aws-sdk/deploy/wrapper/variables.tf, go/sample-apps/function/build.sh, go/sample-apps/function/function.go, go/sample-apps/function/go.mod, java/README.md, java/awssdk-autoconfigure/build.gradle.kts, java/awssdk-autoconfigure/src/main/java/io/opentelemetry/instrumentation/awssdk/v2_2/autoconfigure/AutoconfiguredTracingExecutionInterceptor.java, java/awssdk-autoconfigure/src/main/resources/software/amazon/awssdk/global/handlers/execution.interceptors, java/build.gradle.kts, java/dependencyManagement/build.gradle.kts, java/gradle.properties, java/gradle/wrapper/gradle-wrapper.properties, java/gradlew, java/gradlew.bat, java/layer-javaagent/build.gradle.kts, java/layer-javaagent/scripts/otel-handler, java/layer-wrapper/build.gradle.kts, java/layer-wrapper/scripts/otel-handler, java/layer-wrapper/scripts/otel-proxy-handler, java/layer-wrapper/scripts/otel-sqs-handler, java/layer-wrapper/scripts/otel-stream-handler, java/sample-apps/aws-sdk/README.md, java/sample-apps/aws-sdk/build.gradle.kts, java/sample-apps/aws-sdk/deploy/agent/main.tf, java/sample-apps/aws-sdk/deploy/agent/outputs.tf, java/sample-apps/aws-sdk/deploy/agent/variables.tf, java/sample-apps/aws-sdk/deploy/wrapper/main.tf, java/sample-apps/aws-sdk/deploy/wrapper/outputs.tf, java/sample-apps/aws-sdk/deploy/wrapper/variables.tf, java/sample-apps/aws-sdk/src/main/java/io/opentelemetry/lambda/sampleapps/awssdk/AwsSdkRequestHandler.java, java/sample-apps/aws-sdk/src/main/resources/log4j2.xml, java/sample-apps/okhttp/README.md, java/sample-apps/okhttp/build.gradle.kts, java/sample-apps/okhttp/deploy/wrapper/main.tf, java/sample-apps/okhttp/deploy/wrapper/outputs.tf, java/sample-apps/okhttp/deploy/wrapper/variables.tf, java/sample-apps/okhttp/src/main/java/io/opentelemetry/lambda/sampleapps/okhttp/OkHttpRequestHandler.java, java/sample-apps/okhttp/src/main/resources/log4j2.xml, java/sample-apps/sqs/README.md, java/sample-apps/sqs/build.gradle.kts, java/sample-apps/sqs/deploy/agent/main.tf, java/sample-apps/sqs/deploy/agent/outputs.tf, java/sample-apps/sqs/deploy/agent/variables.tf, java/sample-apps/sqs/deploy/wrapper/main.tf, java/sample-apps/sqs/deploy/wrapper/outputs.tf, java/sample-apps/sqs/deploy/wrapper/variables.tf, java/sample-apps/sqs/src/main/java/io/opentelemetry/lambda/sampleapps/sqs/SqsRequestHandler.java, java/sample-apps/sqs/src/main/resources/log4j2.xml, java/settings.gradle.kts, nodejs/.commitlintrc.yml, nodejs/.editorconfig, nodejs/.gitattributes, nodejs/.npmignore, nodejs/README.md, nodejs/eslint.config.js, nodejs/lerna.json, nodejs/packages/cx-wrapper/package.json, nodejs/packages/layer/package.json, nodejs/sample-apps/aws-sdk/.eslintignore, nodejs/sample-apps/aws-sdk/.eslintrc.js, nodejs/sample-apps/aws-sdk/README.md, nodejs/sample-apps/aws-sdk/config.yaml, nodejs/sample-apps/aws-sdk/deploy/wrapper/main.tf, nodejs/sample-apps/aws-sdk/deploy/wrapper/outputs.tf, nodejs/sample-apps/aws-sdk/deploy/wrapper/variables.tf, nodejs/sample-apps/aws-sdk/package.json, nodejs/sample-apps/aws-sdk/src/index.ts, nodejs/sample-apps/aws-sdk/tsconfig.json, python/README.md, python/sample-apps/aws-sdk/deploy/wrapper/main.tf, python/sample-apps/aws-sdk/deploy/wrapper/outputs.tf, python/sample-apps/aws-sdk/deploy/wrapper/variables.tf, python/sample-apps/build.sh, python/sample-apps/function/lambda_function.py, python/sample-apps/function/requirements.txt, python/sample-apps/run.sh, python/sample-apps/template.yml, python/src/build.sh, python/src/otel/Dockerfile, python/src/otel/Makefile, python/src/otel/otel_sdk/nodeps-requirements.txt, python/src/otel/otel_sdk/otel-instrument, python/src/otel/otel_sdk/otel_wrapper.py, python/src/otel/otel_sdk/requirements.txt, python/src/otel/tests/mocks/lambda_function.py, python/src/otel/tests/nodeps-requirements.txt, python/src/otel/tests/requirements.txt, python/src/otel/tests/test_otel.py, python/src/run.sh, python/src/template.yml, python/src/tox.ini, ruby/README.md, ruby/sample-apps/function/Gemfile, ruby/sample-apps/function/lambda_function.rb, ruby/sample-apps/template.yml, ruby/src/build.sh, ruby/src/otel/Dockerfile, ruby/src/otel/layer/Gemfile, ruby/src/otel/layer/Makefile, ruby/src/otel/layer/otel-handler, ruby/src/otel/layer/wrapper.rb, ruby/src/template.yml, ruby/src/zip_ruby_layer.sh, scripts/build-nodejs.sh, scripts/build_nodejs_layer.sh, scripts/check_size.sh, scripts/deploy-nodejs.sh, scripts/publish-sandbox.sh, utils/sam/run.sh, utils/terraform/api-gateway-proxy/main.tf, utils/terraform/api-gateway-proxy/outputs.tf, utils/terraform/api-gateway-proxy/variables.tf
💤 Files with no reviewable changes (142)
- .github/ISSUE_TEMPLATE/feature_request.md
- CONTRIBUTING.md
- nodejs/sample-apps/aws-sdk/.eslintignore
- .github/ISSUE_TEMPLATE/bug_report.md
- java/awssdk-autoconfigure/src/main/resources/software/amazon/awssdk/global/handlers/execution.interceptors
- .github/workflows/ci-shellcheck.yml
- RELEASE.md
- dotnet/README.md
- .github/release.yml
- java/gradle/wrapper/gradle-wrapper.properties
- ci-scripts/publish_test.sh
- java/awssdk-autoconfigure/build.gradle.kts
- java/sample-apps/sqs/deploy/wrapper/outputs.tf
- python/src/otel/otel_sdk/nodeps-requirements.txt
- java/sample-apps/sqs/src/main/resources/log4j2.xml
- java/sample-apps/sqs/deploy/agent/outputs.tf
- nodejs/sample-apps/aws-sdk/.eslintrc.js
- .github/workflows/ci-nodejs.yml
- python/src/build.sh
- dotnet/sample-apps/aws-sdk/wrapper/SampleApps/AwsSdkSample.sln
- java/sample-apps/aws-sdk/src/main/resources/log4j2.xml
- python/sample-apps/aws-sdk/deploy/wrapper/outputs.tf
- python/sample-apps/function/requirements.txt
- java/sample-apps/sqs/src/main/java/io/opentelemetry/lambda/sampleapps/sqs/SqsRequestHandler.java
- python/src/otel/tests/requirements.txt
- python/src/otel/Dockerfile
- dotnet/sample-apps/aws-sdk/wrapper/SampleApps/AwsSdkSample/AwsSdkSample.csproj
- python/src/run.sh
- go/sample-apps/function/build.sh
- ruby/sample-apps/function/Gemfile
- .github/workflows/ci-terraform.yml
- .github/workflows/release-layer-nodejs.yml
- python/sample-apps/run.sh
- nodejs/.editorconfig
- java/sample-apps/aws-sdk/deploy/wrapper/main.tf
- collector/config.yaml
- nodejs/.gitattributes
- python/src/otel/tests/nodeps-requirements.txt
- nodejs/.commitlintrc.yml
- python/sample-apps/build.sh
- ci-scripts/publish_production.sh
- .github/workflows/ci-java.yml
- go/sample-apps/aws-sdk/deploy/wrapper/main.tf
- nodejs/.npmignore
- java/sample-apps/sqs/build.gradle.kts
- .github/workflows/release-layer-java.yml
- .github/workflows/codeql.yml
- java/sample-apps/okhttp/deploy/wrapper/outputs.tf
- java/sample-apps/aws-sdk/deploy/wrapper/outputs.tf
- java/sample-apps/okhttp/build.gradle.kts
- java/sample-apps/okhttp/README.md
- nodejs/sample-apps/aws-sdk/tsconfig.json
- java/sample-apps/aws-sdk/deploy/agent/outputs.tf
- java/README.md
- nodejs/lerna.json
- java/gradle.properties
- python/src/otel/tests/mocks/lambda_function.py
- .github/workflows/publish-nodejs.yml
- .github/workflows/release-layer-ruby.yml
- java/layer-wrapper/scripts/otel-sqs-handler
- ci-scripts/publish_layer.sh
- ruby/src/otel/layer/Gemfile
- ruby/src/otel/layer/Makefile
- ruby/src/build.sh
- ruby/sample-apps/function/lambda_function.rb
- java/gradlew.bat
- java/layer-wrapper/build.gradle.kts
- java/gradlew
- utils/terraform/api-gateway-proxy/outputs.tf
- .github/workflows/release-layer-collector.yml
- dotnet/sample-apps/aws-sdk/deploy/wrapper/main.tf
- dev/build-nodejs.sh
- java/settings.gradle.kts
- nodejs/sample-apps/aws-sdk/README.md
- java/sample-apps/okhttp/deploy/wrapper/variables.tf
- python/sample-apps/function/lambda_function.py
- dotnet/sample-apps/aws-sdk/wrapper/SampleApps/AwsSdkSample/Function.cs
- python/src/otel/otel_sdk/requirements.txt
- java/sample-apps/okhttp/src/main/resources/log4j2.xml
- java/layer-javaagent/build.gradle.kts
- ruby/src/zip_ruby_layer.sh
- ruby/src/otel/layer/otel-handler
- java/sample-apps/aws-sdk/README.md
- ruby/src/otel/layer/wrapper.rb
- go/sample-apps/function/function.go
- .github/workflows/ci-collector.yml
- java/layer-wrapper/scripts/otel-stream-handler
- nodejs/sample-apps/aws-sdk/src/index.ts
- go/sample-apps/aws-sdk/deploy/wrapper/variables.tf
- nodejs/eslint.config.js
- java/layer-wrapper/scripts/otel-proxy-handler
- nodejs/sample-apps/aws-sdk/config.yaml
- utils/terraform/api-gateway-proxy/main.tf
- go/sample-apps/aws-sdk/deploy/wrapper/outputs.tf
- .github/workflows/layer-publish.yml
- dotnet/sample-apps/aws-sdk/deploy/wrapper/outputs.tf
- nodejs/sample-apps/aws-sdk/package.json
- python/src/otel/Makefile
- java/layer-javaagent/scripts/otel-handler
- ruby/src/otel/Dockerfile
- java/sample-apps/okhttp/deploy/wrapper/main.tf
- python/src/template.yml
- python/README.md
- python/src/tox.ini
- java/sample-apps/aws-sdk/deploy/agent/variables.tf
- ruby/src/template.yml
- nodejs/sample-apps/aws-sdk/deploy/wrapper/variables.tf
- java/sample-apps/aws-sdk/src/main/java/io/opentelemetry/lambda/sampleapps/awssdk/AwsSdkRequestHandler.java
- java/build.gradle.kts
- java/dependencyManagement/build.gradle.kts
- java/sample-apps/sqs/deploy/wrapper/main.tf
- .github/workflows/release-layer-python.yml
- utils/sam/run.sh
- java/sample-apps/sqs/deploy/wrapper/variables.tf
- go/sample-apps/function/go.mod
- ruby/sample-apps/template.yml
- python/src/otel/otel_sdk/otel-instrument
- python/sample-apps/template.yml
- java/sample-apps/sqs/deploy/agent/variables.tf
- java/sample-apps/aws-sdk/deploy/wrapper/variables.tf
- java/sample-apps/sqs/deploy/agent/main.tf
- java/sample-apps/sqs/README.md
- nodejs/sample-apps/aws-sdk/deploy/wrapper/main.tf
- nodejs/sample-apps/aws-sdk/deploy/wrapper/outputs.tf
- utils/terraform/api-gateway-proxy/variables.tf
- .github/workflows/close-stale.yaml
- java/layer-wrapper/scripts/otel-handler
- java/sample-apps/aws-sdk/deploy/agent/main.tf
- go/README.md
- python/sample-apps/aws-sdk/deploy/wrapper/main.tf
- python/sample-apps/aws-sdk/deploy/wrapper/variables.tf
- python/src/otel/tests/test_otel.py
- docs/design_proposal.md
- java/sample-apps/okhttp/src/main/java/io/opentelemetry/lambda/sampleapps/okhttp/OkHttpRequestHandler.java
- dotnet/sample-apps/aws-sdk/wrapper/SampleApps/AwsSdkSample/Properties/launchSettings.json
- ruby/README.md
- java/awssdk-autoconfigure/src/main/java/io/opentelemetry/instrumentation/awssdk/v2_2/autoconfigure/AutoconfiguredTracingExecutionInterceptor.java
- .github/workflows/ci-python.yml
- dotnet/sample-apps/aws-sdk/deploy/wrapper/variables.tf
- python/src/otel/otel_sdk/otel_wrapper.py
- dotnet/sample-apps/aws-sdk/wrapper/SampleApps/build.sh
- java/sample-apps/aws-sdk/build.gradle.kts
registries:
  - codeartifact
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Read-only verification: locate how CODEARTIFACT_AUTH_TOKEN is sourced/rotated in-repo.
# Expectation: some documented or automated rotation path should exist.
set -euo pipefail
echo "== References to CODEARTIFACT_AUTH_TOKEN and CodeArtifact =="
rg -n "CODEARTIFACT_AUTH_TOKEN|codeartifact|dependabot secret|dependabot-secret|gh secret" -C 2 .github scripts README* UPSTREAM.md || true
echo
echo "== Workflow files that may rotate/update secrets =="
fd -HI ".yml" .github/workflows -x rg -n "dependabot|CODEARTIFACT|codeartifact|gh secret" {} || true
Repository: helloextend/opentelemetry-lambda
Length of output: 829
🌐 Web query:
For GitHub Dependabot npm registries, what is the maximum lifetime of AWS CodeArtifact authorization tokens, and is automatic token refresh supported directly by Dependabot?
💡 Result:
For GitHub Dependabot npm registries with AWS CodeArtifact, the maximum lifetime of authorization tokens is 12 hours (configurable between 15 minutes and 12 hours). Automatic token refresh is not supported directly by Dependabot when using static tokens stored as secrets; users must refresh them manually or via external GitHub Actions workflows scheduled to run before expiration (e.g., every 10-11 hours). However, since February 2026, Dependabot supports OpenID Connect (OIDC) authentication for AWS CodeArtifact, allowing dynamic generation of short-lived tokens without storing long-lived credentials, effectively handling token refresh automatically via OIDC federation (requires configuring aws-region, account-id, role-name, domain, domain-owner in dependabot.yml).
Citations:
- 1: https://docs.aws.amazon.com/codeartifact/latest/ug/tokens-authentication.html
- 2: https://docs.github.com/en/code-security/dependabot/working-with-dependabot/configuring-access-to-private-registries-for-dependabot?learn=dependency_version_updates&learnProduct=code-security
- 3: https://github.blog/changelog/2026-02-03-dependabot-now-supports-oidc-authentication/
🏁 Script executed:
cat .github/dependabot.yml
Repository: helloextend/opentelemetry-lambda
Length of output: 1616
🏁 Script executed:
# Check for any GitHub Actions that might rotate the CodeArtifact token
fd -HI ".yml" .github/workflows -x grep -l "CODEARTIFACT\|token.*rotate\|secret.*update" {} || echo "No rotation workflows found"
Repository: helloextend/opentelemetry-lambda
Length of output: 103
🏁 Script executed:
# Search for any documentation about the CodeArtifact setup
rg -i "codeartifact.*token\|token.*rotation\|token.*refresh" --type markdown --type txt . || echo "No documentation found"
Repository: helloextend/opentelemetry-lambda
Length of output: 98
Implement CodeArtifact token refresh strategy or migrate to OIDC authentication.
AWS CodeArtifact tokens expire within 12 hours. With `replaces-base: true`, Dependabot cannot fall back to the public npm registry, halting all npm updates until the token is manually rotated. No automated rotation mechanism is currently in place.
Two solutions:
- Scheduled rotation (short-term): add a GitHub Actions workflow that refreshes the token every 10–11 hours and updates the `CODEARTIFACT_AUTH_TOKEN` secret.
- OIDC authentication (preferred): since February 2026, Dependabot supports OpenID Connect (OIDC) for AWS CodeArtifact, eliminating manual token rotation. Configure `aws-region`, `account-id`, `role-name`, `domain`, and `domain-owner` in the codeartifact registry block instead of `token`.
Also applies to: 57-62
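A registry block using the OIDC fields named above might look roughly like the sketch below. Every value is a placeholder, not this repo's real account or domain, and the exact schema should be confirmed against GitHub's Dependabot private-registry documentation before adopting it.

```yaml
# Sketch only: OIDC-based CodeArtifact registry for Dependabot.
# Field names follow the finding above; all values are placeholders.
registries:
  codeartifact:
    type: npm-registry
    url: https://example-domain-111111111111.d.codeartifact.us-east-1.amazonaws.com/npm/shared/
    aws-region: us-east-1
    account-id: "111111111111"
    role-name: dependabot-codeartifact-read
    domain: example-domain
    domain-owner: "111111111111"
    replaces-base: true
```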
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/dependabot.yml around lines 38 - 39, Dependabot is using a
CodeArtifact registry token that expires and blocks npm updates; either add a
scheduled token-rotation Action that refreshes CODEARTIFACT_AUTH_TOKEN every
~10–11 hours and updates the repository secret, or migrate the dependabot
registry block (the registries: - codeartifact entry and its codeartifact
config) to OIDC by replacing the static token with the new fields aws-region,
account-id, role-name, domain, and domain-owner so Dependabot can assume an AWS
role via OIDC and eliminate manual rotation; implement one of these strategies
and update the .github/dependabot.yml codeartifact registry entries accordingly.
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
Scope AWS credentials to the publish job only.
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported at workflow scope, so the collector build and npm-based Node.js build both receive publish credentials they never use. That unnecessarily expands the blast radius if a dependency or action is compromised. Move these secrets onto package-and-publish or just the publish step.
Suggested fix
-env:
- AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
- AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-
jobs:
@@
package-and-publish:
needs: [build-collector, build-nodejs]
runs-on: blacksmith-4vcpu-ubuntu-2404
+ env:
+ AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
+    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/publish-extend-otel-layer.yml around lines 20 - 22, The
workflow currently exposes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY at
workflow-level env; restrict them by removing those two keys from the top-level
env and add them only to the job "package-and-publish" (or the specific publish
step) so only the publish task receives secrets. Update the env block for the
"package-and-publish" job (or the publish step) to include AWS_ACCESS_KEY_ID:
${{ secrets.AWS_ACCESS_KEY_ID }} and AWS_SECRET_ACCESS_KEY: ${{
secrets.AWS_SECRET_ACCESS_KEY }} and ensure other jobs (collector build /
Node.js build) no longer inherit those env vars.
- uses: actions/download-artifact@v4
  with: { name: collector-${{ matrix.architecture }}, path: dl/collector }
- uses: actions/download-artifact@v4
  with: { name: nodejs-layer, path: dl/nodejs }
Fix the invalid inline `with:` mappings.
These `with:` lines do not parse as valid YAML here, which matches the actionlint/yamllint failures. The workflow will not load until they are rewritten as block mappings.
Suggested fix
- uses: actions/download-artifact@v4
- with: { name: collector-${{ matrix.architecture }}, path: dl/collector }
+ with:
+ name: collector-${{ matrix.architecture }}
+ path: dl/collector
- uses: actions/download-artifact@v4
- with: { name: nodejs-layer, path: dl/nodejs }
+ with:
+ name: nodejs-layer
+    path: dl/nodejs
🧰 Tools
🪛 actionlint (1.7.12)
[error] 76-76: could not parse as YAML: did not find expected ',' or '}'
(syntax-check)
🪛 YAMLlint (1.38.0)
[error] 77-77: syntax error: expected ',' or '}', but got '{'
(syntax)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/publish-extend-otel-layer.yml around lines 76 - 79, The
inline `with:` mappings for the actions/download-artifact@v4 steps are invalid
YAML; replace the inline braces with proper block mappings so they parse
correctly. Update the two occurrences that use `with: { name: collector-${{
matrix.architecture }}, path: dl/collector }` and `with: { name: nodejs-layer,
path: dl/nodejs }` to use block-style keys under `with:` (name: <value> and
path: <value>) for the corresponding actions/download-artifact@v4 steps so
actionlint/yamllint pass.
github.com/stretchr/testify v1.11.1
go.opentelemetry.io/collector/client v1.56.0
go.opentelemetry.io/collector/component v1.56.0
go.opentelemetry.io/collector/confmap v1.16.0
go.opentelemetry.io/collector/consumer v0.109.0
go.opentelemetry.io/collector/consumer/consumertest v0.109.0
go.opentelemetry.io/collector/pdata v1.15.0
go.opentelemetry.io/collector/processor v0.109.0
go.uber.org/zap v1.27.0
go.opentelemetry.io/collector/consumer v1.56.0
go.opentelemetry.io/collector/consumer/consumertest v0.150.0
go.opentelemetry.io/collector/pdata v1.56.0
go.opentelemetry.io/collector/processor v1.56.0
go.opentelemetry.io/collector/processor/processorhelper v0.150.0
go.opentelemetry.io/collector/processor/processortest v0.150.0
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "==> confmap versions across collector modules"
rg -n 'go\.opentelemetry\.io/collector/confmap v' --glob 'collector/**/go.mod'
echo
echo "==> decoupleprocessor references to confmap"
rg -n 'go\.opentelemetry\.io/collector/confmap' collector/processor/decoupleprocessor -g '!**/*.sum'
Repository: helloextend/opentelemetry-lambda
Length of output: 726
Bump confmap with the rest of the Collector stack.
Line 12 pins go.opentelemetry.io/collector/confmap to v1.16.0 while the surrounding Collector modules were upgraded to v1.56.0. The receiver (collector/receiver/telemetryapireceiver/go.mod) and root (collector/go.mod) already use v1.56.0. This creates unnecessary version skew and appears to be an accidental partial upgrade.
Suggested fix
- go.opentelemetry.io/collector/confmap v1.16.0
+ go.opentelemetry.io/collector/confmap v1.56.0📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| github.com/stretchr/testify v1.11.1 | |
| go.opentelemetry.io/collector/client v1.56.0 | |
| go.opentelemetry.io/collector/component v1.56.0 | |
| go.opentelemetry.io/collector/confmap v1.16.0 | |
| go.opentelemetry.io/collector/consumer v0.109.0 | |
| go.opentelemetry.io/collector/consumer/consumertest v0.109.0 | |
| go.opentelemetry.io/collector/pdata v1.15.0 | |
| go.opentelemetry.io/collector/processor v0.109.0 | |
| go.uber.org/zap v1.27.0 | |
| go.opentelemetry.io/collector/consumer v1.56.0 | |
| go.opentelemetry.io/collector/consumer/consumertest v0.150.0 | |
| go.opentelemetry.io/collector/pdata v1.56.0 | |
| go.opentelemetry.io/collector/processor v1.56.0 | |
| go.opentelemetry.io/collector/processor/processorhelper v0.150.0 | |
| go.opentelemetry.io/collector/processor/processortest v0.150.0 | |
| github.com/stretchr/testify v1.11.1 | |
| go.opentelemetry.io/collector/client v1.56.0 | |
| go.opentelemetry.io/collector/component v1.56.0 | |
| go.opentelemetry.io/collector/confmap v1.56.0 | |
| go.opentelemetry.io/collector/consumer v1.56.0 | |
| go.opentelemetry.io/collector/consumer/consumertest v0.150.0 | |
| go.opentelemetry.io/collector/pdata v1.56.0 | |
| go.opentelemetry.io/collector/processor v1.56.0 | |
| go.opentelemetry.io/collector/processor/processorhelper v0.150.0 | |
| go.opentelemetry.io/collector/processor/processortest v0.150.0 |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@collector/processor/decoupleprocessor/go.mod` around lines 9 - 18, The go.mod
entry for the OpenTelemetry confmap module is pinned to v1.16.0 causing version
skew with the rest of the Collector stack; update the module line for
go.opentelemetry.io/collector/confmap to v1.56.0 to match the other collector
modules (e.g., the surrounding go.opentelemetry.io/collector/* entries), then
run go mod tidy to reconcile dependencies and verify builds/tests.
"@opentelemetry/exporter-metrics-otlp-proto": "0.213.0",
"@opentelemetry/exporter-trace-otlp-proto": "0.213.0",
"@opentelemetry/instrumentation": "file:../../../opentelemetry-js/experimental/packages/opentelemetry-instrumentation/opentelemetry-instrumentation-0.213.0.tgz",
"@opentelemetry/instrumentation": "0.214.0",
🧩 Analysis chain
🌐 Web query:
For OpenTelemetry JS experimental packages (0.x), is mixing `@opentelemetry/instrumentation@0.214.0` with `@opentelemetry/instrumentation-http@0.213.0` and `@opentelemetry/exporter-trace-otlp-proto@0.213.0` officially supported?
💡 Result:
No, mixing @opentelemetry/instrumentation@0.214.0 with @opentelemetry/instrumentation-http@0.213.0 and @opentelemetry/exporter-trace-otlp-proto@0.213.0 is not officially supported for OpenTelemetry JS experimental (0.x) packages.
Citations:
- 1: https://github.com/open-telemetry/opentelemetry-js
- 2: https://www.npmjs.com/package/@opentelemetry/instrumentation
- 3: https://github.com/open-telemetry/opentelemetry-js/blob/main/experimental/CHANGELOG.md
- 4: https://www.npmjs.com/package/@opentelemetry/instrumentation-http
- 5: https://registry.npmjs.org/%40opentelemetry%2Finstrumentation-http
- 6: https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-proto
🏁 Script executed:
cat nodejs/packages/cx-wrapper/package.json | sed -n '30,50p'
Repository: helloextend/opentelemetry-lambda
Length of output: 1436
Resolve OpenTelemetry instrumentation version incompatibility before release.
Line 38 upgrades @opentelemetry/instrumentation to 0.214.0, while related OTel experimental packages remain at 0.213.0 (@opentelemetry/instrumentation-http, @opentelemetry/instrumentation-grpc, and @opentelemetry/exporter-trace-otlp-proto). This version mix is not officially supported for OpenTelemetry JS 0.x packages and will cause runtime incompatibilities.
Either revert @opentelemetry/instrumentation to 0.213.0 or upgrade all related experimental packages to 0.214.0 together.
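A quick local spot-check for a mixed cohort, sketched here against inline sample data mirroring the versions quoted in this review rather than the repo's real `package.json` (point the grep at `nodejs/packages/cx-wrapper/package.json` for the real thing):

```shell
# List the distinct 0.x versions pinned for @opentelemetry packages.
# Sample data only; the real check would read the package.json file.
pkg='
  "@opentelemetry/instrumentation": "0.214.0",
  "@opentelemetry/instrumentation-http": "0.213.0",
  "@opentelemetry/exporter-trace-otlp-proto": "0.213.0",
'
# Extract every 0.x.y version string and de-duplicate; more than one
# surviving line means the experimental cohort is mixed.
versions=$(printf '%s\n' "$pkg" | grep -o '0\.[0-9][0-9]*\.[0-9][0-9]*' | sort -u)
printf '%s\n' "$versions"
```

The same one-liner works in CI as a guard: fail the build when the de-duplicated list has more than one entry.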
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@nodejs/packages/cx-wrapper/package.json` at line 38, The package version
mismatch for OpenTelemetry will cause runtime incompatibilities: update the
dependency set so all related OTel packages use the same minor version—either
revert "@opentelemetry/instrumentation" back to "0.213.0" or bump
"@opentelemetry/instrumentation-http", "@opentelemetry/instrumentation-grpc",
and "@opentelemetry/exporter-trace-otlp-proto" to "0.214.0" so all four packages
share the identical 0.214.0 (or all remain 0.213.0) version; modify the
package.json entries for those specific package names accordingly and run
install to verify no peer/version conflicts remain.
CX_CONTRIB_CACHE="${OPENTELEMETRY_JS_CONTRIB_PATH:-$ROOT_DIR/.build-cache/opentelemetry-js-contrib}"

echo "==> resolving cx-contrib fork at $CX_CONTRIB_SHA"
if [ ! -d "$CX_CONTRIB_CACHE/.git" ]; then
  mkdir -p "$(dirname "$CX_CONTRIB_CACHE")"
  git clone --filter=blob:none "$CX_CONTRIB_REPO" "$CX_CONTRIB_CACHE"
fi
git -C "$CX_CONTRIB_CACHE" fetch --quiet origin "$CX_CONTRIB_SHA" 2>/dev/null || git -C "$CX_CONTRIB_CACHE" fetch --quiet origin
git -C "$CX_CONTRIB_CACHE" checkout --quiet "$CX_CONTRIB_SHA"
Don’t rewrite a caller-supplied contrib checkout.
If OPENTELEMETRY_JS_CONTRIB_PATH is set for local dev, this still fetches and checkouts the pinned SHA. That detaches or overwrites the developer’s branch, so the advertised override is not actually usable for “a local checkout you are hacking on”. Only auto-resolve the pinned SHA when the env var is unset; otherwise just validate the supplied path and reuse it.
Suggested fix
-CX_CONTRIB_CACHE="${OPENTELEMETRY_JS_CONTRIB_PATH:-$ROOT_DIR/.build-cache/opentelemetry-js-contrib}"
+USER_CONTRIB_PATH="${OPENTELEMETRY_JS_CONTRIB_PATH:-}"
+CX_CONTRIB_CACHE="${USER_CONTRIB_PATH:-$ROOT_DIR/.build-cache/opentelemetry-js-contrib}"
echo "==> resolving cx-contrib fork at $CX_CONTRIB_SHA"
-if [ ! -d "$CX_CONTRIB_CACHE/.git" ]; then
- mkdir -p "$(dirname "$CX_CONTRIB_CACHE")"
- git clone --filter=blob:none "$CX_CONTRIB_REPO" "$CX_CONTRIB_CACHE"
-fi
-git -C "$CX_CONTRIB_CACHE" fetch --quiet origin "$CX_CONTRIB_SHA" 2>/dev/null || git -C "$CX_CONTRIB_CACHE" fetch --quiet origin
-git -C "$CX_CONTRIB_CACHE" checkout --quiet "$CX_CONTRIB_SHA"
+if [ -z "$USER_CONTRIB_PATH" ]; then
+ if [ ! -d "$CX_CONTRIB_CACHE/.git" ]; then
+ mkdir -p "$(dirname "$CX_CONTRIB_CACHE")"
+ git clone --filter=blob:none "$CX_CONTRIB_REPO" "$CX_CONTRIB_CACHE"
+ fi
+ git -C "$CX_CONTRIB_CACHE" fetch --quiet origin "$CX_CONTRIB_SHA" 2>/dev/null || git -C "$CX_CONTRIB_CACHE" fetch --quiet origin
+ git -C "$CX_CONTRIB_CACHE" checkout --quiet "$CX_CONTRIB_SHA"
+elif ! git -C "$CX_CONTRIB_CACHE" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
+ echo "OPENTELEMETRY_JS_CONTRIB_PATH must point to a git checkout" >&2
+ exit 1
+fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/build-nodejs.sh` around lines 21 - 29, The script currently always
treats CX_CONTRIB_CACHE (set from OPENTELEMETRY_JS_CONTRIB_PATH or default) as a
cache to fetch and checkout CX_CONTRIB_SHA which will detach/overwrite a
caller-supplied local checkout; change the logic so when
OPENTELEMETRY_JS_CONTRIB_PATH is set you only validate that CX_CONTRIB_CACHE is
a git repo with the expected commit (or at least exists and is readable) and
then reuse it without running git fetch/checkout; only perform the
fetch/checkout workflow (using CX_CONTRIB_REPO, git clone, git fetch, git
checkout with CX_CONTRIB_SHA) when OPENTELEMETRY_JS_CONTRIB_PATH is unset and
you are auto-resolving into the build cache, leaving a developer’s local
checkout untouched.
CX_CONTRIB_CACHE="${OPENTELEMETRY_JS_CONTRIB_PATH:-.build-cache/opentelemetry-js-contrib}"

case "$ARCH" in
  amd64) AWS_ARCH="x86_64" ;;
  arm64) AWS_ARCH="arm64" ;;
  *) echo "unsupported arch: $ARCH"; exit 1 ;;
esac

echo "==> resolving cx-contrib fork at $CX_CONTRIB_SHA"
if [ ! -d "$CX_CONTRIB_CACHE/.git" ]; then
  mkdir -p "$(dirname "$CX_CONTRIB_CACHE")"
  git clone --filter=blob:none "$CX_CONTRIB_REPO" "$CX_CONTRIB_CACHE"
fi
git -C "$CX_CONTRIB_CACHE" fetch --quiet origin "$CX_CONTRIB_SHA" 2>/dev/null || git -C "$CX_CONTRIB_CACHE" fetch --quiet origin
git -C "$CX_CONTRIB_CACHE" checkout --quiet "$CX_CONTRIB_SHA"
export OPENTELEMETRY_JS_CONTRIB_PATH
OPENTELEMETRY_JS_CONTRIB_PATH="$(cd "$CX_CONTRIB_CACHE" && pwd)"
Respect the local contrib override here too.
Line 21 advertises OPENTELEMETRY_JS_CONTRIB_PATH for local development, but this path still gets fetched and checked out to the pinned SHA. That mutates the caller’s repo/worktree and makes the override unusable for testing local changes. Mirror the same “only pin when the env var is unset” behavior here.
Suggested fix
-CX_CONTRIB_CACHE="${OPENTELEMETRY_JS_CONTRIB_PATH:-.build-cache/opentelemetry-js-contrib}"
+USER_CONTRIB_PATH="${OPENTELEMETRY_JS_CONTRIB_PATH:-}"
+CX_CONTRIB_CACHE="${USER_CONTRIB_PATH:-.build-cache/opentelemetry-js-contrib}"
@@
echo "==> resolving cx-contrib fork at $CX_CONTRIB_SHA"
-if [ ! -d "$CX_CONTRIB_CACHE/.git" ]; then
- mkdir -p "$(dirname "$CX_CONTRIB_CACHE")"
- git clone --filter=blob:none "$CX_CONTRIB_REPO" "$CX_CONTRIB_CACHE"
-fi
-git -C "$CX_CONTRIB_CACHE" fetch --quiet origin "$CX_CONTRIB_SHA" 2>/dev/null || git -C "$CX_CONTRIB_CACHE" fetch --quiet origin
-git -C "$CX_CONTRIB_CACHE" checkout --quiet "$CX_CONTRIB_SHA"
+if [ -z "$USER_CONTRIB_PATH" ]; then
+ if [ ! -d "$CX_CONTRIB_CACHE/.git" ]; then
+ mkdir -p "$(dirname "$CX_CONTRIB_CACHE")"
+ git clone --filter=blob:none "$CX_CONTRIB_REPO" "$CX_CONTRIB_CACHE"
+ fi
+ git -C "$CX_CONTRIB_CACHE" fetch --quiet origin "$CX_CONTRIB_SHA" 2>/dev/null || git -C "$CX_CONTRIB_CACHE" fetch --quiet origin
+ git -C "$CX_CONTRIB_CACHE" checkout --quiet "$CX_CONTRIB_SHA"
+elif ! git -C "$CX_CONTRIB_CACHE" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
+ echo "OPENTELEMETRY_JS_CONTRIB_PATH must point to a git checkout" >&2
+ exit 1
+fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/publish-sandbox.sh` around lines 21 - 37, The script currently always
clones/fetches and checks out CX_CONTRIB_SHA into CX_CONTRIB_CACHE, which
mutates a caller-provided OPENTELEMETRY_JS_CONTRIB_PATH; change the logic so
that if OPENTELEMETRY_JS_CONTRIB_PATH is already set (and non-empty) the script
uses it unchanged and skips cloning/fetching/checking out; only when
OPENTELEMETRY_JS_CONTRIB_PATH is unset should you set CX_CONTRIB_CACHE (from
OPENTELEMETRY_JS_CONTRIB_PATH default), perform mkdir/git clone/git fetch/git -C
... checkout of CX_CONTRIB_REPO at CX_CONTRIB_SHA and then export
OPENTELEMETRY_JS_CONTRIB_PATH="$(cd "$CX_CONTRIB_CACHE" && pwd)". Ensure
references to OPENTELEMETRY_JS_CONTRIB_PATH, CX_CONTRIB_CACHE, CX_CONTRIB_REPO,
CX_CONTRIB_SHA and the git -C ... checkout/fetch commands are used to locate the
code to change.
This repo is a fork of [`coralogix/opentelemetry-lambda`](https://github.com/coralogix/opentelemetry-lambda) (branch `coralogix-nodejs-autoinstrumentation`), which is itself a fork of [`open-telemetry/opentelemetry-lambda`](https://github.com/open-telemetry/opentelemetry-lambda).

We also consume [`coralogix/opentelemetry-js-contrib`](https://github.com/coralogix/opentelemetry-js-contrib) (branch `coralogix-autoinstrumentation`) at build time — pinned separately in `scripts/publish-sandbox.sh` and `.github/workflows/publish-extend-otel-layer.yml`.
Fix pin-location list to avoid SHA drift.
Line 5 says the cx-contrib pin is tracked in two places, but Lines 63-68 require three (including scripts/build-nodejs.sh). Please make Line 5 consistent to prevent partial updates.
📝 Suggested doc patch
-We also consume [`coralogix/opentelemetry-js-contrib`](https://github.com/coralogix/opentelemetry-js-contrib) (branch `coralogix-autoinstrumentation`) at build time — pinned separately in `scripts/publish-sandbox.sh` and `.github/workflows/publish-extend-otel-layer.yml`.
+We also consume [`coralogix/opentelemetry-js-contrib`](https://github.com/coralogix/opentelemetry-js-contrib) (branch `coralogix-autoinstrumentation`) at build time — pinned separately in `scripts/publish-sandbox.sh`, `scripts/build-nodejs.sh`, and `.github/workflows/publish-extend-otel-layer.yml`.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~5-~5: The official name of this software platform is spelled with a capital “H”.
Context: ...ely in scripts/publish-sandbox.sh and `.github/workflows/publish-extend-otel-layer.yml...
(GITHUB)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@UPSTREAM.md` at line 5, The sentence on Line 5 of UPSTREAM.md incorrectly
lists only two pin locations; update that sentence to state three pin locations
so it matches Lines 63-68—mentioning scripts/publish-sandbox.sh,
.github/workflows/publish-extend-otel-layer.yml, and scripts/build-nodejs.sh
(the `coralogix/opentelemetry-js-contrib` pin tracked at build time) so the doc
consistently reflects all places that must be updated to avoid SHA drift.
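Since the pin lives in several files, a small guard script can fail fast when they disagree. The sketch below runs against stand-in files so it is self-contained; the variable name `CX_CONTRIB_SHA` and the file names follow this review's description of the repo, not verified contents.

```shell
# Fail if the files that hard-code CX_CONTRIB_SHA disagree on its value.
# In the repo, point the grep at the real pin locations instead of $tmp.
set -eu
tmp=$(mktemp -d)
printf 'CX_CONTRIB_SHA="3a9691a6"\n' > "$tmp/publish-sandbox.sh"
printf 'CX_CONTRIB_SHA="3a9691a6"\n' > "$tmp/build-nodejs.sh"
# One distinct assignment line across all files means the pins agree.
distinct=$(grep -h 'CX_CONTRIB_SHA=' "$tmp"/*.sh | sort -u | wc -l)
if [ "$distinct" -ne 1 ]; then
  echo "cx-contrib pin drift detected" >&2
  exit 1
fi
echo "pins aligned"
rm -rf "$tmp"
```

Wired into CI before the publish step, a check like this would catch the partial-update failure mode the finding describes.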
- extend/README.md: split consumer-contract table into collector env vars vs per-request OTLP headers; move ARIZE_PROJECT_NAME to headers (no collector default); note ARIZE_S3_BUCKET_NAME applies to s3 variant only
- publish-extend-otel-layer.yml: size-check the merged layer.zip against Lambda's 50MB zipped limit after the collector+nodejs merge step
- UPSTREAM.md: "three places" -> "four places" to match the four bullets
- collector/lambdacomponents/exporter: rename logging.go -> debug.go and update build tag to lambdacomponents.exporter.debug
- collector configs: declare tls.insecure: false on otlp/coralogix and otlp/arize exporters to document intent at the security boundary
- cx-wrapper + layer package.json: bump exporter-*-otlp-proto, instrumentation-grpc, instrumentation-http to 0.214.0 so the whole @opentelemetry/* experimental cohort is on one version
- publish-sandbox.sh: add --description with cx-contrib SHA + build time to aws lambda publish-layer-version for easier sandbox triage

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Summary

- Forks `coralogix/opentelemetry-lambda` (branch `coralogix-nodejs-autoinstrumentation`) and ships a single Node.js Lambda layer, `extend-nodejs-wrapper-and-exporter-{amd64,arm64}`, published to account 159581800400 in us-east-1 + us-west-2 and org-visible to Extend AWS accounts.
- Consumers opt in via `NodeLambdaBuilder.otelTracingProps`; the default is cx-only.
- Removes the other language layers (`dotnet/`, `java/`, `ruby/`, `go/`, `python/`) and their CI. `python/` is intentionally removed; `origin/python-instrumentation` is the starting point if Python autoinstrumentation is ever needed.
- `UPSTREAM.md`: fork-points for coralogix/opentelemetry-lambda, coralogix/opentelemetry-js-contrib, and open-telemetry/opentelemetry-lambda (latest absorbed OTel-upstream tag: `layer-nodejs/0.10.0` -> SHA `c9e67c4`, via coralogix merge commit `436f3d0`, 2024-10-28). Includes a remote-setup block and the manual sync procedure walking all three. Full rationale in the linked Confluence page.
- Pins the `cx-contrib` SHA (`3a9691a6…`) in `scripts/publish-sandbox.sh`, `scripts/build-nodejs.sh`, and `publish-extend-otel-layer.yml`; all three must be bumped together.
- Consolidates `ci-scripts/` + `dev/` into `scripts/`. Deletes dead upstream workflows (`ci-java`, `publish-nodejs`, `release-layer-*`, `codeql`, `close-stale`), issue templates, `CONTRIBUTING.md`, and `RELEASE.md`.
- Trims `CODEOWNERS` to `@helloextend/devops` and `dependabot.yml` to the ecosystems we actually ship (github-actions, gomod for the collector, npm for nodejs).

Test plan

- `./scripts/publish-sandbox.sh arm64` green end-to-end on a clean checkout (no stale `.build-cache/`); publishes `extend-nodejs-wrapper-and-exporter-sandbox-arm64` to the engservices sandbox in us-east-1. Same for `amd64`.
- A `workflow_dispatch` run of `publish-extend-otel-layer.yml` completes both matrix legs; confirm the resulting layer ARNs are org-visible from a consumer account via `aws lambda get-layer-version`.
- Deploy via `extend-cdk-lib/NodeLambdaBuilder` with `otelTracingProps` omitted; verify spans land in Coralogix.
- Deploy with `otelTracingProps.arize` set; verify spans fan out to Coralogix and Arize.
- Deploy with `otelTracingProps.s3Archival`; verify traces are written to the configured bucket.
- `make -C collector package-extend GOARCH=arm64` builds cleanly on Go 1.25.

🤖 Generated with Claude Code
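The org-visibility check in the test plan can be run from a consumer account roughly as below. The account, region, and layer name are taken from the summary; the version number is a placeholder, and the `aws` call is commented out since it needs consumer-account credentials.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Assemble the layer ARN from the published coordinates.
region=us-east-1
account=159581800400
layer=extend-nodejs-wrapper-and-exporter-arm64
arn="arn:aws:lambda:${region}:${account}:layer:${layer}"
echo "$arn"

# From a consumer account (version 1 is a placeholder):
# aws lambda get-layer-version --layer-name "$arn" --version-number 1 \
#   --query 'LayerVersionArn'
```

A successful `get-layer-version` response from a non-publishing account is what actually proves the org grant works.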