Rep-MTL: Unleashing the Power of Representation-level Task Saliency for Multi-Task Learning

arXiv Project Page HuggingFace Daily Top 5

Zedong Wang¹, Siyuan Li², Dan Xu¹

¹The Hong Kong University of Science and Technology, ²Zhejiang University



Rep-MTL Method Overview

(a) Most existing MTL optimization methods focus on resolving conflicts in parameter updates. (b) Rep-MTL instead leverages task saliency in the shared representation space to explicitly facilitate cross-task information sharing while preserving task-specific signals via regularization, without modifying either the underlying optimizers or the model architectures.

Overview

Rep-MTL is a representation-level regularization method for multi-task learning. It introduces task-saliency-based objectives that encourage inter-task complementarity via Cross-task Saliency Alignment (CSA) and mitigate negative transfer among tasks via Task-specific Saliency Regulation (TSR).
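
To give a rough feel for where such a regularizer sits in a training loop, here is a minimal, illustrative sketch assuming a PyTorch-style setup with a shared encoder and per-task heads. It is not the official Rep-MTL implementation: the gradient-of-loss-w.r.t.-representation proxy for task saliency and the cosine-based alignment penalty are stand-ins for the actual CSA term, the TSR term is omitted, and all names (e.g., `rep_level_training_step`, `lambda_csa`) are hypothetical. Please refer to the paper and the released code for the real formulations.

```python
import torch
import torch.nn.functional as F


def rep_level_training_step(encoder, heads, criteria, x, targets, lambda_csa=0.1):
    """One multi-task step with an illustrative representation-level regularizer.

    `encoder` is a shared backbone; `heads`, `criteria`, and `targets` are dicts
    keyed by task name. `lambda_csa` is an illustrative regularization weight.
    """
    z = encoder(x)  # shared representation, shape (B, D)

    task_losses = {}
    saliencies = {}
    for name, head in heads.items():
        loss = criteria[name](head(z), targets[name])
        task_losses[name] = loss
        # "Task saliency" is approximated here as the gradient of the task loss
        # w.r.t. the shared representation; create_graph=True keeps the
        # regularizer differentiable w.r.t. the model parameters.
        sal = torch.autograd.grad(loss, z, retain_graph=True, create_graph=True)[0]
        saliencies[name] = sal.flatten(1)

    # Cross-task alignment: penalize pairwise misalignment between task saliencies
    # (a hypothetical stand-in for CSA; a TSR-style per-task term would be added
    # analogously but is omitted in this sketch).
    names = list(saliencies)
    align = 0.0
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            cos = F.cosine_similarity(saliencies[names[i]], saliencies[names[j]], dim=1)
            align = align + (1.0 - cos).mean()

    return sum(task_losses.values()) + lambda_csa * align
```

In a training loop one would simply call `.backward()` on the returned loss and step a standard optimizer, consistent with the point above that no changes to optimizers or model architectures are required.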

Benchmarks

We evaluate Rep-MTL on several challenging MTL benchmarks spanning diverse computer vision scenarios:

| Dataset | Tasks | Scenario | Download |
|---|---|---|---|
| NYUv2 | SemSeg + Depth Est. + Surface Normal Pred. | Indoor Dense Prediction | Link |
| CityScapes | SemSeg + Depth Est. | Outdoor Dense Prediction | Link |
| Office-31 | Image Classification (31 classes) | Domain Adaptation | Link |
| Office-Home | Image Classification (65 classes) | Domain Adaptation | Link |

Updates

  • [July 24, 2025] 🎉 Rep-MTL was selected as an ICCV 2025 Highlight! We are cleaning and organizing our codebase. Stay tuned!
  • [June 26, 2025] Rep-MTL was accepted to ICCV 2025, with final ratings: 5/5/6 (out of 6).

Contact

For questions or research discussions, please contact Zedong Wang at [email protected].

BibTeX

@inproceedings{iccv2025repmtl,
  title={Rep-MTL: Unleashing the Power of Representation-level Task Saliency for Multi-Task Learning},
  author={Wang, Zedong and Li, Siyuan and Xu, Dan},
  booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}

Acknowledgements

We thank the following great repositories that facilitated our research: LibMTL, CAGrad, MTAN, FAMO, and Nash-MTL. We also extend our appreciation to many other studies in the community for their foundational contributions that inspired this work.
