Integrating Task-Specific and Universal Adapters for Pre-Trained Model-based Class-Incremental Learning
The code repository for "Integrating Task-Specific and Universal Adapters for Pre-Trained Model-based Class-Incremental Learning" (ICCV 2025) in PyTorch. If you use any content of this repo for your work, please cite the following bib entry:
```bibtex
@inproceedings{wang2025integrating,
    title={Integrating Task-Specific and Universal Adapters for Pre-Trained Model-based Class-Incremental Learning},
    author={Yan Wang and Da-Wei Zhou and Han-Jia Ye},
    booktitle={ICCV},
    year={2025}
}
```
- [08/2025] Code has been released.
- [08/2025] arXiv paper has been released.
- [06/2025] Accepted to ICCV 2025.
Class-Incremental Learning (CIL) requires a learning system to continually learn new classes without forgetting. Existing pre-trained model-based CIL methods often freeze the pre-trained network and adapt to incremental tasks using additional lightweight modules such as adapters. However, incorrect module selection during inference hurts performance, and task-specific modules often overlook shared general knowledge, leading to errors when distinguishing similar classes across tasks. To address these challenges, we propose integrating Task-Specific and Universal Adapters (TUNA) in this paper. Specifically, we train task-specific adapters to capture the most crucial features relevant to their respective tasks and introduce an entropy-based selection mechanism to choose the most suitable adapter. Furthermore, we leverage an adapter fusion strategy to construct a universal adapter, which encodes the most discriminative features shared across tasks. We combine task-specific and universal adapter predictions to harness both specialized and general knowledge during inference. Extensive experiments on various benchmark datasets demonstrate the state-of-the-art performance of our approach.
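For intuition, the sketch below illustrates the inference flow described above: each task-specific adapter scores the input, the adapter with the lowest prediction entropy is selected, and its output is combined with the universal adapter's prediction. This is a minimal hypothetical sketch, not the repository's implementation; the function names, tensor shapes, and the blending weight `alpha` are all assumptions.

```python
import torch
import torch.nn.functional as F

def prediction_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the softmax distribution, per sample."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

@torch.no_grad()
def tuna_inference(task_logits: list[torch.Tensor],
                   universal_logits: torch.Tensor,
                   alpha: float = 0.5) -> torch.Tensor:
    """Select the task-specific adapter with the most confident
    (lowest-entropy) prediction per sample, then blend it with the
    universal adapter's prediction. `alpha` is an assumed weight."""
    stacked = torch.stack(task_logits)                                # (A, B, C)
    ent = torch.stack([prediction_entropy(l) for l in task_logits])  # (A, B)
    best = ent.argmin(dim=0)                                          # (B,)
    idx = best.view(1, -1, 1).expand(1, stacked.size(1), stacked.size(2))
    selected = stacked.gather(0, idx).squeeze(0)                      # (B, C)
    # Combine specialized and general predictions.
    return alpha * selected + (1 - alpha) * universal_logits

# Example: three task adapters, batch of 4, 100 classes.
logits = [torch.randn(4, 100) for _ in range(3)]
fused = tuna_inference(logits, torch.randn(4, 100))
```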
We provide the processed datasets as follows:
- CIFAR100: will be automatically downloaded by the code.
- ImageNet-R: Google Drive: link or Onedrive: link
- ImageNet-A: Google Drive: link or Onedrive: link
- ObjectNet: Onedrive: link. You can also refer to the filelist if the file is too large to download.
You need to modify the dataset paths in ./utils/data.py according to your own environment.
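In practice this amounts to pointing the dataset entries at your local folders. The snippet below is a hypothetical illustration (the variable names are not from the repository; the actual ones are defined in ./utils/data.py), with a quick sanity check before launching a long run:

```python
from pathlib import Path

# Hypothetical local dataset locations -- replace with your own paths.
# The actual variables to edit are defined in ./utils/data.py.
IMAGENET_R_TRAIN = Path("/data/imagenet-r/train")
IMAGENET_R_TEST = Path("/data/imagenet-r/test")

# Fail early if a folder is missing, rather than mid-training.
for p in (IMAGENET_R_TRAIN, IMAGENET_R_TEST):
    if not p.is_dir():
        raise FileNotFoundError(f"Dataset folder not found: {p}")
```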
These datasets are referenced in the Aper repository.
Please follow the settings in the exps folder to prepare json files, and then run:
```bash
python main.py --config ./exps/[filename].json
```
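For reference, a config file typically specifies the dataset and the incremental split. The keys below are assumptions for illustration only; the authoritative settings are the ones in the provided ./exps/*.json files:

```python
import json

# Hypothetical config keys -- follow the files in ./exps for the real ones.
config = {
    "dataset": "imagenet-r",  # benchmark to run (assumed key)
    "init_cls": 20,           # classes in the first task (assumed key)
    "increment": 20,          # classes added per task (assumed key)
    "model_name": "tuna",     # method identifier (assumed key)
    "device": ["0"],          # GPU id(s) (assumed key)
    "seed": [1993],           # random seed(s) (assumed key)
}

print(json.dumps(config, indent=4))
```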
We would like to express our gratitude to the following repositories for providing valuable components and functions that contributed to our work.