Multimodal-Depression-Detection

This repository is the official implementation of the following paper.

Paper Title: MDD-Net: Multimodal Depression Detection through Mutual Transformer
Md Rezwanul Haque, Md. Milon Islam, S M Taslim Uddin Raju, Hamdi Altaheri, Lobna Nassar, Fakhri Karray

Proceedings of the 2025 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Vienna, Austria. Copyright 2025 by the author(s).

arXiv: 2508.08093

Abstract

Depression is a major mental health condition that severely impacts the emotional and physical well-being of individuals. The ease of collecting data from social media platforms has attracted significant interest in properly utilizing this information for mental health research. A Multimodal Depression Detection Network (MDD-Net), utilizing acoustic and visual data obtained from social media networks, is proposed in this work, where mutual transformers are exploited to efficiently extract and fuse multimodal features for depression detection. The MDD-Net consists of four core modules: an acoustic feature extraction module for retrieving relevant acoustic attributes, a visual feature extraction module for extracting significant high-level patterns, a mutual transformer for computing the correlations among the generated features and fusing them across modalities, and a detection layer for detecting depression from the fused feature representations. Extensive experiments are performed on the multimodal D-Vlog dataset, and the findings reveal that the developed network surpasses the state-of-the-art by up to 17.37% in F1-Score, demonstrating the superior performance of the proposed system. The source code is accessible at https://github.com/rezwanh001/Multimodal-Depression-Detection.
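As a rough illustration of the fusion idea described in the abstract (this is not the authors' implementation), the sketch below shows a minimal mutual cross-attention block in PyTorch: acoustic queries attend to visual features and vice versa, and the two fused streams are pooled and passed to a small detection head. All dimensions, module names, and the pooling strategy are assumptions made for illustration only.

    import torch
    import torch.nn as nn

    class MutualFusion(nn.Module):
        """Toy mutual cross-attention between acoustic and visual features."""
        def __init__(self, d_model=256, n_heads=4, n_classes=2):
            super().__init__()
            # Acoustic queries attend to visual keys/values, and vice versa.
            self.a2v = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.v2a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.head = nn.Linear(2 * d_model, n_classes)

        def forward(self, acoustic, visual):
            # acoustic: (B, T_a, d_model), visual: (B, T_v, d_model)
            a_fused, _ = self.a2v(acoustic, visual, visual)
            v_fused, _ = self.v2a(visual, acoustic, acoustic)
            pooled = torch.cat([a_fused.mean(1), v_fused.mean(1)], dim=-1)
            return self.head(pooled)

    # Example: a batch of 8 clips with 100 acoustic frames and 50 visual frames.
    logits = MutualFusion()(torch.randn(8, 100, 256), torch.randn(8, 50, 256))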





LOCAL ENVIRONMENT

OS          :   Ubuntu 24.04.1 LTS       
Memory      :   128.0 GiB
Processor   :   Intel® Xeon® w5-3425 × 24  
Graphics    :   NVIDIA RTX A6000
CPU(s)      :   24
Gnome       :   46.0 

Prepare Datasets

We use the D-Vlog dataset, proposed in this paper. To obtain it, please fill in the form at the bottom of the dataset website and send a request email to the authors. Following D-Vlog's setup, the dataset is split into train, validation, and test sets with a 7:1:2 ratio.
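If you need to recreate the 7:1:2 split yourself (e.g., for custom experiments), a minimal sketch is given below. The function name, the deterministic shuffle, and the use of integer sample indices are assumptions for illustration; the split files shipped with the official D-Vlog release should take precedence.

    import random

    def split_7_1_2(sample_ids, seed=42):
        # Shuffle deterministically, then take 70% / 10% / 20% slices.
        ids = list(sample_ids)
        random.Random(seed).shuffle(ids)
        n_train, n_val = int(0.7 * len(ids)), int(0.1 * len(ids))
        return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

    # Hypothetical usage over integer sample indices.
    train_ids, val_ids, test_ids = split_7_1_2(range(1000))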


Python Requirements

  • pip requirements: pip install -r requirements.txt
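
A typical environment setup (the environment name and Python version below are placeholders, not prescribed by the repository) might look like:

    $ conda create -n your_env python=3.10
    $ conda activate your_env
    $ pip install -r requirements.txt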

Execution (Depression Detection)

  • $ conda activate your_env

  • To train and validate:

    $ python mainkfold.py

  • To run inference: $ python infer_mainkfold.py

📖 Citation:

  • If you find this project useful for your research, please cite the following paper:
@article{haque2025mdd,
  title={MDD-Net: Multimodal Depression Detection through Mutual Transformer},
  author={Haque, Md Rezwanul and Islam, Md Milon and Raju, S M Taslim Uddin and Altaheri, Hamdi and Nassar, Lobna and Karray, Fakhri},
  journal={arXiv preprint arXiv:2508.08093},
  year={2025}
}

🙌🏻 Acknowledgement

License

This project is licensed under the MIT License.

