@szokeasaurusrex

This PR contains the code we used to measure the performance overhead of the Python profiler. We have opened this PR to document how to reproduce our benchmark; we intend to keep it as a draft PR, since it does not need to be merged into the main branch.

To reproduce the trial we conducted, clone the repository and check out this pull request. Then, follow the instructions here to run the benchmark. Please note: the benchmark does not run on M1 Macs; we performed the benchmarking on an Ubuntu 22.04 Google Cloud VM.

The metric we used to measure overhead is the percent difference between the p90 latency of the server instrumented with both Sentry and the profiler and the p90 latency of the server instrumented with Sentry but without the profiler. To calculate this difference, open the report.html file that is generated when the benchmark finishes and scroll down to the table that displays the server latencies. Then compute the percent difference as follows:

percent_difference = (instrumented_nylas_p90 - instrumented_p90) / instrumented_p90 * 100%

In the above equation, instrumented_nylas_p90 and instrumented_p90 are the 90th percentile latencies for the instrumented_nylas (profiled) and instrumented (non-profiled) runs, respectively.
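
For convenience, here is a minimal Python sketch of that calculation. It is not part of the benchmark code, and the latency values in the example are hypothetical placeholders; substitute the p90 values you read from the latency table in report.html.

```python
def percent_difference(instrumented_nylas_p90: float, instrumented_p90: float) -> float:
    """Percent overhead of the profiled run relative to the non-profiled run."""
    return (instrumented_nylas_p90 - instrumented_p90) / instrumented_p90 * 100


if __name__ == "__main__":
    # Hypothetical p90 latencies (in milliseconds) for illustration only;
    # replace with the values from your report.html.
    overhead = percent_difference(instrumented_nylas_p90=105.0, instrumented_p90=100.0)
    print(f"Profiler overhead: {overhead:.1f}%")  # -> Profiler overhead: 5.0%
```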
