
Conversation

@vadimalekseev (Member) commented on Sep 29, 2025

Describe Your Changes

TODO:

  • Properly encode nested arrays
  • Tests

Also added more test cases than on master to cover all decoding paths.

Also fixed a case whose behavior differed from the master branch. See this new test case:

```go
// decode bytes of ArrayValue
data = `[{"scopeLogs":[{"logRecords":[{"timeUnixNano":1234,"body":{"arrayValue":{"values":[{"bytesValue":"Zm9vIGJhcg=="}]}}}]}]}]`
timestampsExpected = []int64{1234}
resultsExpected = `{"_msg":"[\"Zm9vIGJhcg==\"]","severity":"Unspecified"}`
f(data, timestampsExpected, resultsExpected)
```

- On the master branch this is encoded as `[Zm9vIGJhcg==]`, which is incorrect because /insert/jsonline encodes the same JSON as `[\"Zm9vIGJhcg==\"]`.
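
For context, here is a minimal sketch (not the code from this PR) of why the quotes matter: each element of an arrayValue must be emitted as a JSON string so that the resulting _msg can be parsed back by /insert/jsonline. The encodeArrayValue helper below is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// encodeArrayValue is a hypothetical helper: it renders string-like array
// elements, such as base64-encoded bytesValue payloads, as a JSON array of strings.
func encodeArrayValue(elems []string) string {
	var sb strings.Builder
	sb.WriteByte('[')
	for i, e := range elems {
		if i > 0 {
			sb.WriteByte(',')
		}
		// %q produces the quoted, escaped form, which matches JSON quoting
		// for plain base64 text.
		fmt.Fprintf(&sb, "%q", e)
	}
	sb.WriteByte(']')
	return sb.String()
}

func main() {
	// "Zm9vIGJhcg==" must become ["Zm9vIGJhcg=="],
	// not [Zm9vIGJhcg==] as on the old master branch.
	fmt.Println(encodeArrayValue([]string{"Zm9vIGJhcg=="}))
}
```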

Fixes #689

                                                         │  /tmp/old.txt  │            /tmp/new.txt             │
                                                         │     sec/op     │   sec/op     vs base                │
ParseProtobufRequest/scopes_1/rows_100/attributes_5-14     19191.5n ±  2%   520.4n ± 2%  -97.29% (p=0.000 n=20)
ParseProtobufRequest/scopes_1/rows_100/attributes_10-14    20428.0n ±  2%   548.9n ± 3%  -97.31% (p=0.000 n=20)
ParseProtobufRequest/scopes_1/rows_1000/attributes_5-14     92.719µ ±  0%   4.511µ ± 3%  -95.13% (p=0.000 n=20)
ParseProtobufRequest/scopes_1/rows_1000/attributes_10-14    92.803µ ±  0%   4.480µ ± 3%  -95.17% (p=0.000 n=20)
ParseProtobufRequest/scopes_2/rows_100/attributes_5-14      35.788µ ± 11%   1.025µ ± 3%  -97.14% (p=0.000 n=20)
ParseProtobufRequest/scopes_2/rows_100/attributes_10-14     36.434µ ±  1%   1.094µ ± 3%  -97.00% (p=0.000 n=20)
ParseProtobufRequest/scopes_2/rows_1000/attributes_5-14    165.066µ ±  0%   8.878µ ± 4%  -94.62% (p=0.000 n=20)
ParseProtobufRequest/scopes_2/rows_1000/attributes_10-14   165.714µ ±  0%   8.917µ ± 3%  -94.62% (p=0.000 n=20)
geomean                                                      57.55µ         2.181µ       -96.21%

                                                         │ /tmp/old.txt  │               /tmp/new.txt               │
                                                         │      B/s      │      B/s        vs base                  │
ParseProtobufRequest/scopes_1/rows_100/attributes_5-14     4.969Mi ±  2%   183.253Mi ± 2%  +3588.20% (p=0.000 n=20)
ParseProtobufRequest/scopes_1/rows_100/attributes_10-14    4.668Mi ±  2%   173.745Mi ± 3%  +3621.86% (p=0.000 n=20)
ParseProtobufRequest/scopes_1/rows_1000/attributes_5-14    10.29Mi ±  0%    211.42Mi ± 3%  +1955.49% (p=0.000 n=20)
ParseProtobufRequest/scopes_1/rows_1000/attributes_10-14   10.28Mi ±  0%    212.86Mi ± 3%  +1971.46% (p=0.000 n=20)
ParseProtobufRequest/scopes_2/rows_100/attributes_5-14     5.326Mi ± 10%   186.119Mi ± 3%  +3394.36% (p=0.000 n=20)
ParseProtobufRequest/scopes_2/rows_100/attributes_10-14    5.236Mi ±  1%   174.408Mi ± 3%  +3231.15% (p=0.000 n=20)
ParseProtobufRequest/scopes_2/rows_1000/attributes_5-14    11.55Mi ±  0%    214.84Mi ± 4%  +1759.51% (p=0.000 n=20)
ParseProtobufRequest/scopes_2/rows_1000/attributes_10-14   11.51Mi ±  0%    213.91Mi ± 3%  +1758.33% (p=0.000 n=20)
geomean                                                    7.410Mi           195.5Mi       +2538.80%

Checklist

The following checks are mandatory:

@vadimalekseev marked this pull request as draft on September 29, 2025, 17:13
@vadimalekseev force-pushed the optimize-otel branch 4 times, most recently from 6ecf185 to 3ae813d, on September 30, 2025, 12:05
@vadimalekseev force-pushed the optimize-otel branch 2 times, most recently from 80bb366 to e558503, on November 24, 2025, 09:23
@vadimalekseev marked this pull request as ready for review on November 24, 2025, 09:33
@vadimalekseev force-pushed the optimize-otel branch 2 times, most recently from c54e130 to dcc053e, on November 24, 2025, 09:43
@valyala merged commit 4ffb744 into master on Dec 3, 2025
3 checks passed
@valyala deleted the optimize-otel branch on December 3, 2025, 23:41
@valyala (Contributor) commented on Dec 3, 2025

@vadimalekseev , thank you for the great optimization of OpenTelemetry data ingestion performance at VictoriaLogs!

valyala added a commit to VictoriaMetrics/VictoriaMetrics that referenced this pull request Dec 3, 2025
valyala added a commit to VictoriaMetrics/VictoriaMetrics that referenced this pull request Dec 3, 2025
@valyala (Contributor) commented on Dec 5, 2025

FYI, this pull request has been included in the VictoriaLogs v1.40.0 release.

valyala added a commit to VictoriaMetrics/VictoriaMetrics that referenced this pull request Dec 10, 2025
…rsing of samples sent via OpenTelemetry protocol

This increases the parser performance by 4x-6x.

This commit uses a technique similar to VictoriaMetrics/VictoriaLogs#720.

goos: linux
goarch: amd64
pkg: github.com/VictoriaMetrics/VictoriaMetrics/lib/protoparser/opentelemetry/stream
cpu: AMD Ryzen 7 PRO 5850U with Radeon Graphics
                                                    │   old.txt    │               new.txt               │
                                                    │    sec/op    │   sec/op     vs base                │
ParseStream/default-metrics-labels-formatting-16      15.565µ ± 1%   2.150µ ± 3%  -86.19% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   24.228µ ± 2%   4.355µ ± 1%  -82.02% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          23.028µ ± 2%   3.395µ ± 1%  -85.26% (p=0.000 n=10)
geomean                                                20.55µ        3.168µ       -84.59%

                                                    │   old.txt    │                new.txt                 │
                                                    │     B/s      │      B/s       vs base                 │
ParseStream/default-metrics-labels-formatting-16      127.9Mi ± 1%    918.3Mi ± 3%  +617.82% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   82.19Mi ± 2%   453.32Mi ± 1%  +451.57% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          86.47Mi ± 2%   581.56Mi ± 1%  +572.52% (p=0.000 n=10)
geomean                                               96.88Mi         623.3Mi       +543.34%

                                                    │   old.txt    │                 new.txt                  │
                                                    │     B/op     │    B/op      vs base                     │
ParseStream/default-metrics-labels-formatting-16      12.53Ki ± 0%   0.00Ki ± 0%  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   21.15Ki ± 1%   0.00Ki ±  ?  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          20.74Ki ± 1%   0.00Ki ±  ?  -100.00% (p=0.000 n=10)
geomean                                               17.65Ki                     ?                       ¹ ²
¹ summaries must be >0 to compute geomean
² ratios must be >0 to compute geomean

                                                    │  old.txt   │                new.txt                 │
                                                    │ allocs/op  │ allocs/op  vs base                     │
ParseStream/default-metrics-labels-formatting-16      426.0 ± 0%    0.0 ± 0%  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-labels-formatting-16   514.0 ± 0%    0.0 ± 0%  -100.00% (p=0.000 n=10)
ParseStream/prometheus-metrics-formatting-16          514.0 ± 0%    0.0 ± 0%  -100.00% (p=0.000 n=10)
geomean                                               482.8                   ?                       ¹ ²
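
The zero allocs/op above come from reusing parser state and referencing the input buffer instead of copying it. Below is a minimal, hypothetical sketch of that pattern; parser, Field and addField are illustrative names, not the actual VictoriaMetrics code.

```go
package main

import "fmt"

// Field references data inside the original request buffer; nothing is copied.
type Field struct {
	Name  []byte
	Value []byte
}

// parser is intended to be reused across requests (e.g. via sync.Pool in real code).
type parser struct {
	fields []Field
}

// reset keeps the slice capacity and drops the previously parsed fields.
func (p *parser) reset() {
	p.fields = p.fields[:0]
}

// addField appends a field that points into the input buffer; once the slice
// capacity is warmed up, steady-state parsing performs no allocations.
func (p *parser) addField(name, value []byte) {
	p.fields = append(p.fields, Field{Name: name, Value: value})
}

func main() {
	buf := []byte("severity=INFO")
	var p parser
	p.reset()
	p.addField(buf[:8], buf[9:]) // "severity" and "INFO" alias buf
	fmt.Printf("%s => %s\n", p.fields[0].Name, p.fields[0].Value)
}
```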
jiekun added a commit to VictoriaMetrics/VictoriaTraces that referenced this pull request Jan 7, 2026
Porting VictoriaMetrics/VictoriaLogs#720 to VictoriaTraces to optimize OTLP protobuf data ingestion performance.

This optimization applies to OTLP protobuf data parsing: the input []byte is converted directly into log fields, eliminating the intermediate step of constructing temporary OTLP data models.

Note: OTLP HTTP/JSON data ingestion remains unaffected.
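
For illustration, here is a minimal sketch of the general technique: walk the protobuf wire format in the input []byte directly instead of unmarshaling it into intermediate OTLP structs. The nextField helper below is hypothetical and only handles the varint and length-delimited wire types.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// nextField reads one protobuf field from src and returns the field number,
// the payload for length-delimited fields (nested messages, strings, bytes),
// and the remaining tail. Real code must also handle the fixed32/fixed64 wire types.
func nextField(src []byte) (fieldNum uint64, payload, tail []byte, err error) {
	tag, n := binary.Uvarint(src)
	if n <= 0 {
		return 0, nil, nil, fmt.Errorf("cannot read field tag")
	}
	src = src[n:]
	fieldNum = tag >> 3
	switch wireType := tag & 7; wireType {
	case 0: // varint: skip the value, no payload to return
		_, m := binary.Uvarint(src)
		if m <= 0 {
			return 0, nil, nil, fmt.Errorf("cannot read varint value")
		}
		return fieldNum, nil, src[m:], nil
	case 2: // length-delimited: the payload aliases src, nothing is copied
		l, m := binary.Uvarint(src)
		if m <= 0 || uint64(len(src)-m) < l {
			return 0, nil, nil, fmt.Errorf("cannot read length-delimited field")
		}
		return fieldNum, src[m : m+int(l)], src[m+int(l):], nil
	default:
		return 0, nil, nil, fmt.Errorf("unsupported wire type %d", wireType)
	}
}

func main() {
	// Hand-built message: field 1, wire type 2 (tag 0x0a), 3-byte payload "foo".
	msg := []byte{0x0a, 0x03, 'f', 'o', 'o'}
	num, payload, tail, err := nextField(msg)
	fmt.Println(num, string(payload), len(tail), err) // 1 foo 0 <nil>
}
```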