Remove drift replacement for off-crest Cavity #550
Conversation
I tried implementing a test comparing a 90°
It's not quite a rollback. In #549, I not only introduced the drift replacement but also made some minor improvements to the numerical conditioning of
We should still properly remove the division by zero, though, either by cherry-picking or by waiting for #538.
We decided to cherry-pick the updates from #538 into this PR. We should therefore also consider #538 (comment) here.
@Hespe, maybe you could test just the line computing
There seems to be some performance impact, even when compared to using
Hmm ... I'm not happy about this. Are we sure we cannot find a way of rearranging this equation to avoid the new autograd object and its performance penalty?
Maybe @cr-xu has an idea, but I do not see how one could remove the singularity of
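For illustration, here is a minimal sketch of what such a custom autograd object could look like for a sinc-like term. The class name, the exact expression, and its role in the cavity transfer map are assumptions for the sake of the example; Cheetah's actual code may differ.

```python
import torch


class Sinc(torch.autograd.Function):
    """sin(x) / x with an analytic backward that stays finite at x = 0.

    Hypothetical sketch only: Cheetah's actual singular term may differ.
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        safe_x = torch.where(x == 0, torch.ones_like(x), x)
        return torch.where(x == 0, torch.ones_like(x), torch.sin(safe_x) / safe_x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        safe_x = torch.where(x == 0, torch.ones_like(x), x)
        # d/dx [sin(x)/x] = cos(x)/x - sin(x)/x**2, which tends to 0 as x -> 0.
        derivative = torch.where(
            x == 0,
            torch.zeros_like(x),
            torch.cos(safe_x) / safe_x - torch.sin(safe_x) / safe_x**2,
        )
        return grad_output * derivative


x = torch.tensor([0.0, 1.0], requires_grad=True)
Sinc.apply(x).sum().backward()
print(x.grad)  # finite everywhere, 0.0 at x = 0
```

Because the forward and backward are defined explicitly, autograd never sees the 0/0 intermediate, at the cost of a Python-level `Function` call per evaluation.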
I don't see how this function can be rewritten to get rid of the singularity. My opinion is: to keep it simple, we can just leave it (use torch.where() or even remove that) and note in the docstring that the allowed phases are (-90°, 90°), so one should not set a phase offset of exactly 90°. Even if someone wants exactly 90° RF phase offset and still a gradient with respect to it, they can add a small eps, since that value will be changed later by gradient descent anyway. It doesn't seem worth sacrificing the speed of all other cavity use cases. Apparently, the special autograd handling for PyTorch's sinc function is implemented in C++, so it doesn't suffer much from the slowdown (pytorch/pytorch@5854e93). I don't think that is possible for us.
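On the torch.where() point, it is worth noting that masking alone fixes the forward value but not the gradient: NaNs produced in the unselected branch still propagate through backward. The usual rearrangement (sometimes called the "double-where" trick) is to sanitize the argument before the dangerous division. A small sketch with a generic sin(x)/x term, not Cheetah's actual expression:

```python
import torch

# Naive masking: the forward value at x = 0 is fine, but the gradient is NaN,
# because autograd still differentiates through sin(x)/x at x = 0.
x = torch.zeros(1, requires_grad=True)
y = torch.where(x == 0, torch.ones_like(x), torch.sin(x) / x)
y.backward()
print(x.grad)  # NaN gradient

# "Double where": sanitize the argument first, so the division never sees 0.
x = torch.zeros(1, requires_grad=True)
safe_x = torch.where(x == 0, torch.ones_like(x), x)
y = torch.where(x == 0, torch.ones_like(x), torch.sin(safe_x) / safe_x)
y.backward()
print(x.grad)  # zero gradient, no NaN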
cr-xu
left a comment
Looks good to me. If the custom autograd doesn't impact the overall performance too much, it's the proper way to go. 👍



Description
The initial fix for the linked issue accidentally removed bunch compression effects for off-crest cavities.
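For context, a toy sketch of why the off-crest case matters for compression: an off-crest cavity imprints an approximately linear energy chirp along the bunch, which a downstream dispersive section converts into compression, while on crest the chirp vanishes to first order. This is not Cheetah's actual cavity model, and the voltage and frequency are made-up values.

```python
import math

import torch

voltage = 10e6     # accelerating voltage [V], assumed value
frequency = 1.3e9  # RF frequency [Hz], assumed value
c = 299_792_458.0
k = 2 * math.pi * frequency / c  # RF wave number [1/m]

# Longitudinal positions of head, centre, and tail particles [m].
z = torch.tensor([-1e-3, 0.0, 1e-3])

for phi_deg in (0.0, 30.0):
    phi = torch.deg2rad(torch.tensor(phi_deg))
    energy_gain = voltage * torch.cos(phi + k * z)  # [eV]
    chirp = energy_gain - energy_gain[1]            # gain relative to centre
    print(phi_deg, chirp)
```

Off crest (30°), head and tail sit on opposite sides of the centre gain, giving a linear chirp; on crest (0°), the residual chirp is second order and symmetric. Replacing the cavity with a drift therefore drops exactly this off-crest effect.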
Motivation and Context
Types of changes
Checklist
- [ ] I have run `flake8` (required).
- [ ] All `pytest` tests pass (required).
- [ ] I have run `pytest` on a machine with a CUDA GPU and made sure all tests pass (required).

Note: We are using a maximum length of 88 characters per line.