@@ -21,24 +21,39 @@ changelog <https://keepachangelog.com/en/1.1.0/>`_ format. This project follows
 .. Removed
 .. #######

-Unreleased
-----------
+Version 2025.9 - 2025-08-18
+---------------------------

 Added
 #####

-- Additional logs to the checkpoints, model and the output dirs at the end of training
-- When downloading checkpoints and models from Hugging Face, the files will be cached
-  locally and re-used.
+- Use the best model instead of the latest model for evaluation at the end of training.
+- Log the best epoch when loading checkpoints.
+- Allow changing the scheduler factor in PET.
+- Introduce checkpoint versioning and updating.
+- Added CI tests on GPU.
+- Log the number of model parameters before training starts.
+- Add additional logs to the checkpoints, model, and output directories at the end of
+  training.
+- Cache files locally and re-use them when downloading checkpoints and models from
+  Hugging Face.
 - ``extra_data`` is now a valid section in the ``options.yaml`` file, allowing users to
-  add custom data to the training set. The data is contained in the dataloader and can
-  be used in custom loss functions or models.
-- ``mtt eval`` can be used to evaluate models on a ``DiskDataset``.
+  add custom data to the training set. The data is included in the dataloader and can be
+  used in custom loss functions or models.
+- ``mtt eval`` can now evaluate models on a ``DiskDataset``.
+
+Changed
+#######
+
+- Updated to a new general composition model.
+- Updated to a new implementation of LLPR.

 Fixed
 #####

-- Log is shown when training with ``restart="auto"``
+- Fixed ``device`` and ``dtype`` not being set during LoRA fine-tuning in PET.
+- Log messages are now shown when training with ``restart="auto"``.
+- Fixed incorrect sub-section naming in the Wandb logger.

 Version 2025.8 - 2025-06-11
 ---------------------------