This is the official PyTorch code for the paper:
Real-world Image Dehazing with Coherence-based Pseudo Labeling and Cooperative Unfolding Network
Chengyu Fang, Chunming He, Fengyang Xiao, Yulun Zhang, Longxiang Tang, Yuelin Zhang, Kai Li, and Xiu Li
Advances in Neural Information Processing Systems 2024
- 2025-01-17: We uploaded a simple example for pretraining and fine-tuning with our models.
- 2024-11-06: We fixed some bugs in the code and now support single-GPU training.
- 2024-10-26: Our results and pre-trained weights have been released! ❤️
- 2024-10-23: We are preparing the camera-ready version of this paper; the pretrained weights and test results will be released soon.
- 2024-09-26: This paper has been accepted by NeurIPS 2024 as a Spotlight Paper. Thanks to all the participants, reviewers, chairs, and committee members. We will release the code soon.
We provide two types of dataset loading functions for model training:
1. Load clean images and the corresponding depth maps to generate hazy images with the RIDCP Online Haze Generation Pipeline.
2. Directly load paired clean and degraded images.
You can choose the appropriate method based on your dataset and task.
We support loading depth maps from .npy files (used by RIDCP500) and .mat files (used by OTS/ITS). You can also use depth estimation methods such as Depth Anything or RA-Depth to construct depth maps for your own dataset and save them as .npy files; see the loading sketch below.
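As a minimal sketch (not part of the released code), the snippet below shows one way to read a depth map from either format and how a depth map predicted by an external estimator could be saved as .npy. The "depth" key for .mat files and the file paths are assumptions; adapt them to your data.

```python
import numpy as np
import scipy.io as sio

def load_depth(path: str) -> np.ndarray:
    """Load a depth map from a .npy (RIDCP500-style) or .mat (OTS/ITS-style) file."""
    if path.endswith(".npy"):
        return np.load(path).astype(np.float32)
    if path.endswith(".mat"):
        mat = sio.loadmat(path)
        # "depth" is a hypothetical variable name; check the keys in your .mat files.
        return np.asarray(mat["depth"], dtype=np.float32)
    raise ValueError(f"Unsupported depth file: {path}")

# Saving a depth map predicted by an external estimator (e.g. Depth Anything or
# RA-Depth) as a .npy file for this pipeline; `estimated_depth` is assumed to be
# an HxW numpy array produced by your depth model.
# np.save("my_dataset/depth/0001.npy", estimated_depth)
```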
To train or fine-tune our CORUN, or any other image dehazing method, with online haze generation, please refer to HERE.
To train or fine-tune any image-to-image based image restoration method (which can also be used for the image dehazing task and our proposed CORUN), please refer to HERE.
- RTTS dataset can be downloaded from Dropbox.
- URHI dataset can be downloaded from Dropbox.
- Duplicate Removed URHI can be downloaded from Google Drive
- RIDCP500 can be downloaded from RIDCP's Repo
Download the pre-trained da-clip weights and place them in ./pretrained_weights/. You can download the daclip weights we used from Google Drive. You can also choose other types of CLIP models and the corresponding weights from openclip; if you do so, don't forget to modify your options accordingly.
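For reference, the sketch below shows how an alternative CLIP backbone could be instantiated with open_clip. The model name and pretrained tag are only illustrative examples, not the settings used in this repository, and wiring the chosen model into the option files is still up to you.

```python
import open_clip

# Illustrative only: the model name and pretrained tag are examples,
# not the da-clip weights used by default in this repository.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
```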
git clone https://github.com/cnyvfang/CORUN-Colabator.git
conda create -n corun_colabator python=3.9
conda activate corun_colabator
# If necessary, replace pytorch-cuda=? with the version compatible with your GPU driver.
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia
cd basicsr_modified
pip install tb-nightly -i https://mirrors.aliyun.com/pypi/simple # Run this line if in Chinese Mainland
pip install -r requirements.txt
python setup.py develop
cd ..
pip install -r requirements.txt
python setup.py develop
python init_modules.py

If you are in Mainland China, run the following command before init_modules.py to speed up the download of the pre-trained models:
export HF_ENDPOINT=https://hf-mirror.com

If you want to use another network in place of our CORUN, you only need to add your network to archs, replace the network definition in the option files, and run the script; a minimal registration sketch is shown below. If you need to define your own loss functions and training strategies, modify the corresponding models and losses before invoking them in the option files.
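Since the codebase builds on BasicSR, a custom architecture is typically registered through its ARCH_REGISTRY. The file name, class name, and layer layout below are hypothetical placeholders used only to illustrate the registration step, not part of this repository.

```python
# archs/my_dehaze_arch.py  (hypothetical file; place it under the repo's archs folder)
import torch.nn as nn
from basicsr.utils.registry import ARCH_REGISTRY

@ARCH_REGISTRY.register()
class MyDehazeNet(nn.Module):
    """Hypothetical drop-in network used only to illustrate registration."""

    def __init__(self, in_ch=3, nf=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, nf, 3, 1, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(nf, in_ch, 3, 1, 1),
        )

    def forward(self, x):
        # Predict a residual on top of the hazy input.
        return self.body(x) + x
```

In a BasicSR-style option file, the `type` field of the network definition would then name the new class (here, `MyDehazeNet`).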
This step can be skipped IF YOU DO NOT USE OUR CORUN and have already trained your model well in your own framework.
# Multi-GPU
sh dehazing_options/train_corun_by_depth.sh
# Single-GPU (Not recommended)
sh dehazing_options/train_corun_by_depth_single_gpu.sh
Please do not forget to set and load the pre-trained weights of the first stage in the option file.
# Multi-GPU
sh dehazing_options/train_corun_with_colabator_by_depth.sh
# Single-GPU (Not recommended)
sh dehazing_options/train_corun_with_colabator_by_depth_single_gpu.sh

If you want to use another network in place of Restormer, you only need to add your network to archs (see the registration sketch above), replace the network definition in the option files, and run the script. If you need to define your own loss functions and training strategies, modify the corresponding models and losses before invoking them in the option files.
This step can be skipped if you have already trained your model well in your own framework.
# Multi-GPU
sh image_restoration_options/train_stage1_restormer.sh
# Single-GPU
sh image_restoration_options/train_stage1_restormer_single_gpu.sh
Please do not forget to set and load the pre-trained weights of the first stage in the option file.
# Multi-GPU
sh image_restoration_options/train_stage2_restormer_with_colabator.sh
# Single-GPU
sh image_restoration_options/train_stage2_restormer_with_colabator_single_gpu.sh

Download the pre-trained CORUN weight and place it in ./pretrained_weights/. You can download the CORUN+ weight from Google Drive. To quickly use the results of our experiments without manual inference or retraining, you can download all results dehazed/restored by our model from Google Drive.
CUDA_VISIBLE_DEVICES=0 sh dehazing_options/valid.corun.sh
# OR
CUDA_VISIBLE_DEVICES=0 python3 corun_colabator/simple_test.py \
--opt dehazing_options/valid_corun.yml \
--input_dir /path/to/testset/images \
--result_dir ./results/CORUN \
--weights ./pretrained_weights/CORUN.pth \
    --dataset RTTS

Calculate the NIMA and BRISQUE results:
CUDA_VISIBLE_DEVICES=0 python evaluate.py --input_dir /path/to/results
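If you prefer to script the metric computation yourself, the sketch below shows one way NIMA and BRISQUE could be computed with the pyiqa package. The result folder, image extension, and preprocessing are assumptions, and the provided evaluate.py may use different settings, so treat this only as a rough cross-check.

```python
import glob

import pyiqa
import torch
from torchvision.io import read_image

device = "cuda" if torch.cuda.is_available() else "cpu"
nima = pyiqa.create_metric("nima", device=device)        # higher is better
brisque = pyiqa.create_metric("brisque", device=device)  # lower is better

nima_scores, brisque_scores = [], []
for path in sorted(glob.glob("./results/CORUN/*.png")):  # assumed output folder and extension
    img = read_image(path).float().unsqueeze(0).to(device) / 255.0  # 1x3xHxW in [0, 1]
    nima_scores.append(nima(img).item())
    brisque_scores.append(brisque(img).item())

print(f"NIMA:    {sum(nima_scores) / len(nima_scores):.4f}")
print(f"BRISQUE: {sum(brisque_scores) / len(brisque_scores):.4f}")
```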
We achieved state-of-the-art performance on RTTS and Fattal's datasets and the corresponding downstream tasks. More results can be found in the paper. To quickly use the results of our experiments without manual inference or retraining, you can download all results dehazed/restored by our model from Google Drive.

Quantitative Comparison (click to expand)
Visual Comparison (click to expand)
If you find the code helpful in your research or work, please cite the following paper(s).
@article{fang2024real,
title={Real-world image dehazing with coherence-based pseudo labeling and cooperative unfolding network},
author={Fang, Chengyu and He, Chunming and Xiao, Fengyang and Zhang, Yulun and Tang, Longxiang and Zhang, Yuelin and Li, Kai and Li, Xiu},
journal={Advances in Neural Information Processing Systems},
volume={37},
pages={97859--97883},
year={2024}
}
The code is based on BasicSR. Please also follow their licenses. Thanks for their awesome work.