# Changelog

## v1.3

### New features
* Implement the `PRVAccountant` based on the paper [Numerical Composition of Differential Privacy](https://arxiv.org/abs/2106.02848) (#493); a usage sketch follows this list
* Support `nn.EmbeddingBag` (#519)
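
A minimal sketch of opting into the new accountant via the standard `PrivacyEngine` workflow; the toy model, data, and hyperparameter values below are illustrative placeholders, not part of the release:

```python
# Sketch: select the PRV accountant when constructing the PrivacyEngine.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(16, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data_loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))),
    batch_size=8,
)

privacy_engine = PrivacyEngine(accountant="prv")  # "prv" -> PRVAccountant
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.1,
    max_grad_norm=1.0,
)

# Run one epoch so the accountant has steps to compose
for x, y in data_loader:
    optimizer.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    optimizer.step()

print(privacy_engine.get_epsilon(delta=1e-5))  # spent privacy budget
```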

### Bug fixes
* Fix benchmarks (#503, #507, #508)
* Align `make_private_with_epsilon` with `make_private` (#509, #526); see the sketch after this list
* Test fixes (#513, #515, #527, #533)
* Sum discriminator losses to perform one backprop step (#474)
* Fix missing argument in MNIST example (#520)
* Functorch gradients: investigation and fix (#510)
* Support empty batches (#530)
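
For context on the `make_private_with_epsilon` alignment above: this entry point targets a privacy budget directly and derives the noise multiplier internally, mirroring `make_private`. A hedged sketch; the model, data, and budget values are illustrative assumptions:

```python
# Sketch: make_private_with_epsilon takes (target_epsilon, target_delta)
# and derives the noise multiplier needed to meet that budget.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(16, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data_loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))),
    batch_size=8,
)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    target_epsilon=3.0,   # illustrative budget
    target_delta=1e-5,
    epochs=5,             # accounting assumes this many passes over the data
    max_grad_norm=1.0,
)
print(optimizer.noise_multiplier)  # derived to meet the target budget
```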

## v1.2

### New ways to compute per sample gradients
We're glad to present Opacus v1.2, which contains major updates to per sample gradient computation mechanisms
and includes all the good stuff from the recent PyTorch releases.
* Functorch - per sample gradients for all
* ExpandedWeights - yet another way to compute per sample gradients; a selection sketch follows this list
* See the [Release notes](https://github.com/pytorch/opacus/releases/tag/v1.2.0)
  and the [GradSampleModule README](https://github.com/pytorch/opacus/blob/main/opacus/grad_sample/README.md)
  for a detailed feature explanation
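
A hedged sketch of choosing the per sample gradient mechanism, assuming the `grad_sample_mode` keyword on `make_private` described in the GradSampleModule README linked above ("hooks" is the default; "ew" opts into ExpandedWeights); the model and data are placeholders:

```python
# Sketch: opting into ExpandedWeights-based per sample gradients.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(16, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data_loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))),
    batch_size=8,
)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="ew",  # ExpandedWeights; "hooks" is the default
)
```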

### Other improvements
* Fix `utils.unfold2d` with non-symmetric pad/dilation/kernel_size/stride (#443)
* Add support for "same" and "valid" padding for hooks-based grad sampler for convolution layers