
Remove drift replacement for off-crest Cavity #550

Merged
jank324 merged 16 commits into desy-ml:master from Hespe:fix-cavity-compression
Sep 17, 2025

Conversation

@Hespe
Member

@Hespe Hespe commented Sep 3, 2025

Description

The initial fix for the linked issue accidentally removed bunch compression effects for off-crest cavities.

Motivation and Context

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)

Checklist

  • I have updated the changelog accordingly (required).
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.
  • I have reformatted the code and checked that formatting passes (required).
  • I have fixed all issues found by flake8 (required).
  • I have ensured that all pytest tests pass (required).
  • I have run pytest on a machine with a CUDA GPU and made sure all tests pass (required).
  • I have checked that the documentation builds (required).

Note: We are using a maximum length of 88 characters per line.

@Hespe Hespe self-assigned this Sep 3, 2025
@Hespe Hespe added the bug Something isn't working label Sep 3, 2025
@Hespe Hespe requested a review from cr-xu September 3, 2025 07:02
@Hespe
Member Author

Hespe commented Sep 3, 2025

I tried implementing a test comparing a 90° Cavity to Ocelot, but according to a comment in the code, we implement Cavity without acceleration slightly differently than Ocelot, so that did not work out.

@cr-xu
Member

cr-xu commented Sep 3, 2025

So this basically rolls back the #549, and the fix is expected to be introduced in #538, right?

The changelog seems incorrect as the division by zero is not removed in this PR.

@Hespe
Member Author

Hespe commented Sep 4, 2025

> So this basically rolls back the #549, and the fix is expected to be introduced in #538, right?
>
> The changelog seems incorrect as the division by zero is not removed in this PR.

It's not quite a rollback. In #549 I not only introduced the drift replacement but also made some minor improvements to the numerical conditioning of Cavity. It no longer produces the NaN (although the division by 0 is technically still there).

We should still properly remove the division by 0 though, either by cherry-picking or waiting for #538.

@jank324 jank324 mentioned this pull request Sep 5, 2025
@Hespe
Member Author

Hespe commented Sep 5, 2025

We decided to cherry-pick the updates of #538 into this PR. We should therefore also consider #538 (comment) here.

@Hespe Hespe marked this pull request as draft September 5, 2025 14:39
@Hespe Hespe marked this pull request as ready for review September 8, 2025 09:31
@Hespe Hespe requested a review from jank324 September 8, 2025 09:32
Member

@jank324 jank324 left a comment


@Hespe, did you check the impact of this on speed?

@jank324
Member

jank324 commented Sep 8, 2025

I just ran this script on it, which seems to suggest that there is no meaningful impact on speed.

[Image: benchmark_commits_plot]

@jank324
Member

jank324 commented Sep 8, 2025

@Hespe, maybe you could test just the line computing alpha or something similarly small with timeit to see on a very granular level what the impact is there?

@Hespe
Member Author

Hespe commented Sep 8, 2025

> @Hespe, maybe you could test just the line computing alpha or something similarly small with timeit to see on a very granular level what the impact is there?

There seems to be some performance impact, even when compared to using torch.where to get rid of the singularity at 0. But that might just be the cost of getting proper gradients in that case.

[Screenshots: timeit measurements for the two implementations]
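A granular timeit comparison of the kind jank324 suggested might look something like the sketch below. This is not the actual benchmark from this thread; the name `Log1pOverX` is an illustrative stand-in for the custom autograd function discussed here, under the assumption that the problematic term is log(1+x)/x.

```python
import timeit

import torch

# Illustrative micro-benchmark, not the PR's actual measurement.
x = torch.randn(10_000, dtype=torch.float64, requires_grad=True)


def where_variant():
    # Mask the input as well, so the unsafe branch never sees x == 0.
    safe = torch.where(x == 0, torch.ones_like(x), x)
    y = torch.where(x == 0, torch.ones_like(x), torch.log1p(safe) / safe)
    y.sum().backward()
    x.grad = None


class Log1pOverX(torch.autograd.Function):
    """Hypothetical custom autograd function for f(x) = log(1 + x) / x."""

    @staticmethod
    def forward(ctx, inp):
        ctx.save_for_backward(inp)
        return torch.where(inp == 0, torch.ones_like(inp), torch.log1p(inp) / inp)

    @staticmethod
    def backward(ctx, grad_output):
        (inp,) = ctx.saved_tensors
        # d/dx [log(1+x)/x] = (x/(1+x) - log(1+x)) / x^2, with limit -1/2 at 0.
        grad = torch.where(
            inp == 0,
            torch.full_like(inp, -0.5),
            (inp / (1 + inp) - torch.log1p(inp)) / inp**2,
        )
        return grad_output * grad


def custom_variant():
    y = Log1pOverX.apply(x)
    y.sum().backward()
    x.grad = None


t_where = timeit.timeit(where_variant, number=100)
t_custom = timeit.timeit(custom_variant, number=100)
print(f"torch.where: {t_where:.4f} s, custom autograd: {t_custom:.4f} s")
```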

@jank324
Member

jank324 commented Sep 8, 2025

Hmm ... I'm not happy about this. Can we really not find a way of rearranging this equation to avoid the new autograd object and its performance penalty?

@Hespe
Member Author

Hespe commented Sep 8, 2025

Maybe @cr-xu has an idea, but I do not see how one could remove the singularity of $\frac{\log(1+x)}{x}$ without some kind of case distinction. And a custom autograd function seems to be the way of doing that in Torch without losing gradients.
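Such a custom autograd function could be sketched as follows. This is an illustration, not necessarily the code merged in this PR; the class name `Log1pOverX` is made up, and it assumes the problematic term is exactly log(1+x)/x.

```python
import torch


class Log1pOverX(torch.autograd.Function):
    """f(x) = log(1 + x) / x with the removable singularity at x = 0 filled in.

    From the Taylor series f(x) = 1 - x/2 + x^2/3 - ..., we get f(0) = 1
    and f'(0) = -1/2, so both the value and the gradient are well defined.
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # The NaN in the unselected branch is harmless here, because backward
        # is implemented by hand rather than traced by autograd.
        return torch.where(x == 0, torch.ones_like(x), torch.log1p(x) / x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d/dx [log(1+x)/x] = (x/(1+x) - log(1+x)) / x^2, with limit -1/2 at 0.
        grad = torch.where(
            x == 0,
            torch.full_like(x, -0.5),
            (x / (1 + x) - torch.log1p(x)) / x**2,
        )
        return grad_output * grad


x = torch.tensor([0.0, 1.0], requires_grad=True)
y = Log1pOverX.apply(x)
y.sum().backward()
print(y)       # values: 1.0 at the singularity, log(2) at x = 1
print(x.grad)  # finite everywhere, -0.5 at the singularity
```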

@cr-xu
Member

cr-xu commented Sep 9, 2025

I don't see how this function can be rewritten to get rid of the singularity.

My opinion is: to keep it simple, we can also just leave it (just use torch.where() or even remove that) and note in the docstring that the allowed phases are (-90°, 90°), so one should not set a phase offset of exactly 90°.

Even if someone wants exactly 90° RF phase offset and still a gradient with respect to it, they can add a small eps, as that value will be changed by gradient descent later anyway. It doesn't seem worth sacrificing speed for all other cavity use cases.


Apparently, the special autograd handling for PyTorch's sinc function is implemented in C++, so it doesn't suffer as much from the speed penalty (pytorch/pytorch@5854e93). I don't think that is possible for us.
Maybe we can try to open an issue in PyTorch and convince them to include it in the main PyTorch branch, but that would probably take longer, and there's no promise it will happen.
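For reference, the reason plain torch.where() masking is tricky here is a well-known autograd pitfall: a NaN computed in the unselected branch leaks into the gradient through the chain rule, and the usual double-where workaround gives a finite but not exact gradient at the singularity. A small illustration (not code from this PR):

```python
import torch

# Naive masking: the forward value at x = 0 is fine, but the gradient is not.
x = torch.tensor([0.0], requires_grad=True)
y = torch.where(x == 0, torch.ones_like(x), torch.log1p(x) / x)
y.backward()
print(x.grad)  # tensor([nan]): 0 * nan = nan in the backward of the division

# The usual workaround also masks the *input*, so the unsafe branch never
# sees x = 0. The gradient is then finite, but 0 rather than the true
# limit -1/2, which is why a custom autograd function is needed for exact
# gradients at the singularity.
x2 = torch.tensor([0.0], requires_grad=True)
safe = torch.where(x2 == 0, torch.ones_like(x2), x2)
y2 = torch.where(x2 == 0, torch.ones_like(x2), torch.log1p(safe) / safe)
y2.backward()
print(x2.grad)  # tensor([0.])
```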

Comment thread cheetah/utils/autograd.py Outdated
Member

@cr-xu cr-xu left a comment


Looks good to me. If the custom autograd doesn't impact the overall performance too much, it's the proper way to go. 👍

@jank324 jank324 merged commit 3295b9c into desy-ml:master Sep 17, 2025
15 of 17 checks passed
@jank324 jank324 mentioned this pull request Sep 19, 2025

Labels

bug Something isn't working


3 participants