Fix errorbar in benchmarks_visualizer.py #711


Open: wants to merge 5 commits into base main

Conversation

Stonepia

Summary

The default errorbar is None, which leads to "TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'".

We no longer override errorbar and instead let seaborn decide the width.
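The root cause can be reproduced without seaborn's plotting machinery. Seaborn's internal `_percentile_interval` helper computes `edge = (100 - width) / 2`, so a `None` error-bar width propagates into integer arithmetic and raises the TypeError shown below. This is a minimal sketch; `percentile_interval` here is a simplified stand-in for seaborn's private helper, not the actual library source:

```python
import numpy as np


def percentile_interval(data, width):
    """Simplified stand-in for seaborn's _percentile_interval helper."""
    # Seaborn computes: edge = (100 - width) / 2. If width is None
    # (an errorbar width that was never resolved to a number), this is
    # int - NoneType and raises the TypeError from the traceback below.
    edge = (100 - width) / 2
    percentiles = edge, 100 - edge
    return np.nanpercentile(data, percentiles)


data = [1.0, 2.0, 3.0, 4.0]
lo, hi = percentile_interval(data, 95)  # a numeric width works fine

try:
    percentile_interval(data, None)  # mirrors the failing call
except TypeError as exc:
    print(type(exc).__name__)
```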

When running the benchmark_visualizer.py from the tutorial, I got the following error message:

python benchmarks_visualizer.py --kernel-name rope --metric-name memory --kernel-operation-mode full  
Traceback (most recent call last):
  File "/home/usr/folder/ptenv/Liger-Kernel/benchmark/benchmarks_visualizer.py", line 164, in <module>
    main()
  File "/home/usr/folder/ptenv/Liger-Kernel/benchmark/benchmarks_visualizer.py", line 160, in main
    plot_data(df, config)
  File "/home/usr/folder/ptenv/Liger-Kernel/benchmark/benchmarks_visualizer.py", line 112, in plot_data
    ax = sns.lineplot(
         ^^^^^^^^^^^^^
  File "/home/usr/miniforge3/envs/ptenv/lib/python3.12/site-packages/seaborn/relational.py", line 515, in lineplot
    p.plot(ax, kwargs)
  File "/home/usr/miniforge3/envs/ptenv/lib/python3.12/site-packages/seaborn/relational.py", line 295, in plot
    grouped
  File "/home/usr/miniforge3/envs/ptenv/lib/python3.12/site-packages/pandas/core/groupby/groupby.py", line 1819, in apply
    return self._python_apply_general(f, self._obj_with_exclusions)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/usr/miniforge3/envs/ptenv/lib/python3.12/site-packages/pandas/core/groupby/groupby.py", line 1885, in _python_apply_general
    values, mutated = self._grouper.apply_groupwise(f, data, self.axis)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/usr/miniforge3/envs/ptenv/lib/python3.12/site-packages/pandas/core/groupby/ops.py", line 919, in apply_groupwise
    res = f(group)
          ^^^^^^^^
  File "/home/usr/miniforge3/envs/ptenv/lib/python3.12/site-packages/pandas/core/groupby/groupby.py", line 1809, in f
    return func(g, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/usr/miniforge3/envs/ptenv/lib/python3.12/site-packages/seaborn/_statistics.py", line 518, in __call__
    err_min, err_max = _percentile_interval(boots, self.error_level)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/usr/miniforge3/envs/ptenv/lib/python3.12/site-packages/seaborn/_statistics.py", line 665, in _percentile_interval
    edge = (100 - width) / 2
            ~~~~^~~~~~~
TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'

Environment

python version: 3.12.10
pandas: 2.2.3
seaborn: 0.13.2
Liger Kernel: 34bbeb6 (May 12)

Testing Done

  • Hardware Type:
  • run make test to ensure correctness
  • run make checkstyle to ensure code style
  • run make test-convergence to ensure convergence

Tcc0403 (Collaborator) commented May 16, 2025

Thank you! However, for the rope memory benchmark there shouldn't be more than one y_value for any x_value, so the plot it draws is actually incorrect.

This is because there are different extra_benchmark_config values in our all_benchmark_data.csv, and the filtering logic here is insufficient:

df["extra_benchmark_config"] = df["extra_benchmark_config_str"].apply(json.loads)
filtered_df = df[
    (df["kernel_name"] == config.kernel_name)
    & (df["metric_name"] == config.metric_name)
    & (df["kernel_operation_mode"] == config.kernel_operation_mode)
    # Use this to filter by extra benchmark configuration property
    # & (data['extra_benchmark_config'].apply(lambda x: x.get('H') == 4096))
    # FIXME: maybe add a way to filter using some configuration, except of hardcoding it
]
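A sketch of how the commented-out extra-config filter would deduplicate rows. The toy DataFrame below is hypothetical (the `H == 4096` value is just the example from the snippet's own comment); it only illustrates that without the extra filter, two configs contribute two y_values for the same x_value:

```python
import json

import pandas as pd

# Toy frame mimicking all_benchmark_data.csv: two rows share an x_value
# but come from different extra_benchmark_config settings.
df = pd.DataFrame(
    {
        "kernel_name": ["rope", "rope"],
        "metric_name": ["memory", "memory"],
        "kernel_operation_mode": ["full", "full"],
        "x_value": [1024, 1024],
        "extra_benchmark_config_str": ['{"H": 4096}', '{"H": 8192}'],
    }
)
df["extra_benchmark_config"] = df["extra_benchmark_config_str"].apply(json.loads)

# Uncommenting the extra-config condition leaves one y_value per x_value,
# so seaborn no longer aggregates across unrelated configurations.
filtered_df = df[
    (df["kernel_name"] == "rope")
    & (df["metric_name"] == "memory")
    & (df["kernel_operation_mode"] == "full")
    & (df["extra_benchmark_config"].apply(lambda c: c.get("H") == 4096))
]
print(len(filtered_df))
```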

I'll open an issue for it.
