Boosting Modality Representation with Pre-trained Models and Multi-task Training for Multimodal Sentiment Analysis

BoostingMSA is the official PyTorch implementation of the above paper.


BoostingMSA (💻WIP)

TODO List

  • Update code
  • Update tutorial
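
The code and tutorial are still works in progress (see the TODO list above). In the meantime, a minimal way to grab the current state of the code, assuming the standard GitHub URL for JHU-LCAP/BoostingMSA:

git clone https://github.com/JHU-LCAP/BoostingMSA.git
cd BoostingMSA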

References

If you find the code useful for your research, please consider citing:

@inproceedings{hai2023boosting,
  title={Boosting Modality Representation with Pre-trained Models and Multi-task Training for Multimodal Sentiment Analysis},
  author={Hai, Jiarui and Liu, Yu-jeh and Elhilali, Mounya},
  booktitle={2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
  year={2023},
  organization={IEEE}
}

We built this repo based on:

@inproceedings{liu2022make,
  title={Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module},
  author={Liu, Yihe and Yuan, Ziqi and Mao, Huisheng and Liang, Zhiyun and Yang, Wanqiuyue and Qiu, Yuanzhe and Cheng, Tie and Li, Xiaoteng and Xu, Hua and Gao, Kai},
  booktitle={Proceedings of the 2022 International Conference on Multimodal Interaction},
  pages={247--258},
  year={2022}
}

Acknowledgement

We borrow code from the following repos:
