uTDSP

Transformer-based Diffusion and Spectral Priors Model For Hyperspectral Pansharpening

📄 [Paper Link (IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2025)]

Hyperspectral Pansharpening with Transformer-based Spectral Diffusion Priors

📄 [Paper Link (The IEEE/CVF Winter Conference on Applications of Computer Vision 2025)]

Authors:
Hongcheng Jiang
Zhiqiang Chen



🔍 Overview

The goal is to reconstruct a high-resolution hyperspectral image (HR-HSI) by fusing a low-resolution hyperspectral image (LR-HSI) and a high-resolution panchromatic image (HR-PCI). Unlike conventional methods that rely on paired HR-HSI ground truth, uTDSP is entirely unsupervised, leveraging spectral priors and a transformer-based diffusion model to guide the reconstruction process.
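
In the unsupervised setting, the only supervision comes from the two observations themselves: the LR-HSI is treated as a blurred and downsampled version of the target HR-HSI, and the HR-PCI as a weighted spectral mixture of its bands. The sketch below illustrates these two degradation operators and the resulting data-fidelity terms; it is a minimal illustration rather than the repository's code, and the Gaussian blur, scale factor, and spectral-response weights `srf` are assumptions.

```python
# Minimal sketch (not the repository's code) of the two degradation operators
# an unsupervised pansharpening objective is typically built on.
# Assumptions: Gaussian blur, integer scale factor, known spectral response.
import torch
import torch.nn.functional as F

def spatial_degrade(hr_hsi: torch.Tensor, scale: int = 4, ksize: int = 9) -> torch.Tensor:
    """Blur and downsample an HR-HSI (B, C, H, W) to the LR-HSI grid."""
    sigma = 0.5 * scale
    ax = torch.arange(ksize, dtype=hr_hsi.dtype, device=hr_hsi.device) - (ksize - 1) / 2
    g1d = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    kernel = torch.outer(g1d, g1d)
    kernel = (kernel / kernel.sum()).expand(hr_hsi.shape[1], 1, ksize, ksize)
    blurred = F.conv2d(hr_hsi, kernel, padding=ksize // 2, groups=hr_hsi.shape[1])
    return blurred[..., ::scale, ::scale]

def spectral_degrade(hr_hsi: torch.Tensor, srf: torch.Tensor) -> torch.Tensor:
    """Collapse bands to a panchromatic image using response weights srf of shape (C,)."""
    srf = srf / srf.sum()
    return torch.einsum("bchw,c->bhw", hr_hsi, srf).unsqueeze(1)

def fidelity_loss(est_hr_hsi, lr_hsi, hr_pci, srf, scale=4):
    """Unsupervised data-fidelity terms comparing the estimate against both observations."""
    spatial = F.l1_loss(spatial_degrade(est_hr_hsi, scale), lr_hsi)
    spectral = F.l1_loss(spectral_degrade(est_hr_hsi, srf), hr_pci)
    return spatial + spectral
```

Minimizing fidelity terms of this kind, together with the spectral priors and the diffusion model described below, is what allows training without HR-HSI ground truth.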


🧠 Key Features

  • 🎯 Unsupervised Learning: Learns directly from LR-HSI and HR-PCI without requiring any ground-truth HR-HSI.
  • 🌀 Spectral Diffusion Process: Incorporates a transformer-based denoiser within a diffusion framework.
  • 🧩 Spectral Prior Integration: Enforces spectral consistency using priors extracted from the LR-HSI.
  • ⚖️ Adaptive Loss Balancing: Combines spectral fidelity loss and diffusion consistency for robust reconstruction (see the sketch after this list).
  • 🏆 SOTA Results: Achieves state-of-the-art performance across multiple benchmark datasets.
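
One common way to realize the adaptive balancing above is uncertainty-style weighting with learnable log-variances; the sketch below assumes that scheme purely for illustration and is not the paper's exact formulation.

```python
# Hedged sketch of adaptive balancing of two loss terms via learned
# uncertainty-style weights. This is an assumed scheme, not uTDSP's exact one.
import torch
import torch.nn as nn

class AdaptiveLossBalance(nn.Module):
    """Weights two loss terms with learnable log-variances."""
    def __init__(self):
        super().__init__()
        self.log_var_fidelity = nn.Parameter(torch.zeros(()))
        self.log_var_diffusion = nn.Parameter(torch.zeros(()))

    def forward(self, loss_fidelity: torch.Tensor, loss_diffusion: torch.Tensor) -> torch.Tensor:
        # Each term is scaled by exp(-log_var); the additive log_var regularizes
        # the weights so they cannot collapse to zero.
        return (
            torch.exp(-self.log_var_fidelity) * loss_fidelity + self.log_var_fidelity
            + torch.exp(-self.log_var_diffusion) * loss_diffusion + self.log_var_diffusion
        )
```

During training, the spectral fidelity loss and the diffusion (noise-prediction) loss would be passed to `forward` at each optimization step.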

📈 Performance Gains

uTDSP achieves consistent PSNR improvements across diverse airborne and satellite datasets, outperforming both supervised and unsupervised baselines.

✅ Airborne Datasets

  • Chikusei

    • uTDSP: 26.86 dB
    • Best prior method (DDLPS*): 26.85 dB
    • 🔺 +0.01 dB (+0.04%)
  • Indian Pines

    • uTDSP: 25.79 dB
    • Best prior method (DIP-HyperKite): 25.15 dB
    • 🔺 +0.64 dB (+2.54%)
  • PaviaC

    • uTDSP: 28.53 dB
    • Best prior method (DDLPS*): 27.52 dB
    • 🔺 +1.01 dB (+3.67%)
  • PaviaU

    • uTDSP: 30.68 dB
    • Best prior method (GPPNN): 29.86 dB
    • 🔺 +0.82 dB (+2.75%)

✅ Satellite Datasets

  • Botswana

    • uTDSP: 31.61 dB
    • Best prior method (DIP-HyperKite): 30.24 dB
    • 🔺 +1.37 dB (+4.53%)
  • ZY1-02D

    • uTDSP: 31.22 dB
    • Best prior method (SFIM*): 28.23 dB
    • 🔺 +2.99 dB (+10.59%)

📌 Summary

uTDSP consistently improves PSNR over the best prior method by:

  • +0.01–1.01 dB (up to +3.7%) on airborne datasets
  • +1.37–2.99 dB (up to +10.6%) on satellite datasets

These results confirm uTDSP's strong generalization and superior reconstruction quality across sensing platforms, all achieved without supervision.

🔄 Diffusion Process Illustration
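
As a textual companion to the illustration, here is a minimal DDPM-style sketch of the forward (noising) and reverse (denoising) steps; the linear noise schedule, 1000 steps, and the `transformer_denoiser` callable are placeholder assumptions rather than the paper's actual configuration.

```python
# Minimal DDPM-style sketch of the forward and reverse diffusion steps.
# Schedule, step count, and the denoiser callable are illustrative assumptions.
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def forward_noise(x0: torch.Tensor, t: int):
    """q(x_t | x_0): add Gaussian noise at step t; returns (x_t, noise)."""
    eps = torch.randn_like(x0)
    xt = alphas_bar[t].sqrt() * x0 + (1 - alphas_bar[t]).sqrt() * eps
    return xt, eps

@torch.no_grad()
def reverse_step(xt: torch.Tensor, t: int, transformer_denoiser) -> torch.Tensor:
    """One reverse step using the noise predicted by a (hypothetical) denoiser."""
    eps_hat = transformer_denoiser(xt, t)
    alpha_t, abar_t = 1.0 - betas[t], alphas_bar[t]
    mean = (xt - betas[t] / (1 - abar_t).sqrt() * eps_hat) / alpha_t.sqrt()
    if t == 0:
        return mean
    return mean + betas[t].sqrt() * torch.randn_like(xt)
```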


🧠 Network Architecture

📊 Band-wise PSNR Comparison

The following plots illustrate the PSNR values for each spectral band across six benchmark datasets, highlighting the spectral fidelity of uTDSP compared to existing methods.
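
For reference, band-wise PSNR curves of this kind can be computed and plotted as in the generic sketch below; the `(bands, H, W)` layout and unit data range are assumptions, and this is not the repository's evaluation code.

```python
# Generic per-band PSNR sketch (not the repository's evaluation code).
# Assumes cubes shaped (bands, H, W) scaled to [0, 1].
import numpy as np
import matplotlib.pyplot as plt

def bandwise_psnr(reference: np.ndarray, estimate: np.ndarray, data_range: float = 1.0) -> np.ndarray:
    """PSNR in dB for every spectral band."""
    mse = np.mean((reference - estimate) ** 2, axis=(1, 2))
    return 10.0 * np.log10(data_range ** 2 / np.maximum(mse, 1e-12))

def plot_bandwise_psnr(curves: dict) -> None:
    """Plot one PSNR-vs-band curve per method name."""
    for name, psnr_per_band in curves.items():
        plt.plot(psnr_per_band, label=name)
    plt.xlabel("Spectral band index")
    plt.ylabel("PSNR (dB)")
    plt.legend()
    plt.show()
```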

🛰️ Botswana

🛰️ Chikusei

🛰️ Pavia Center

🛰️ Pavia University

🛰️ Indian Pines

🛰️ Ziyuan-1


🖼️ Visual Results

📷 Airborne Datasets

📷 Satellite Datasets


📊 Quantitative Results on Airborne Datasets

Metrics: PSNR ↑ (higher is better), SAM ↓ and ERGAS ↓ (lower is better)
(* denotes an unsupervised method)
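
For completeness, hedged reference implementations of SAM and ERGAS as they are commonly defined are given below (per-band PSNR is sketched earlier); the paper's exact evaluation protocol, e.g. data range and resolution ratio, may differ.

```python
# Common definitions of SAM and ERGAS for (C, H, W) hyperspectral cubes.
# The resolution ratio and any data scaling are assumptions.
import numpy as np

def sam(ref: np.ndarray, est: np.ndarray) -> float:
    """Mean spectral angle in degrees between per-pixel spectra."""
    ref_v = ref.reshape(ref.shape[0], -1)
    est_v = est.reshape(est.shape[0], -1)
    cos = np.sum(ref_v * est_v, axis=0) / (
        np.linalg.norm(ref_v, axis=0) * np.linalg.norm(est_v, axis=0) + 1e-12
    )
    return float(np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))))

def ergas(ref: np.ndarray, est: np.ndarray, ratio: float = 4.0) -> float:
    """Relative dimensionless global error; `ratio` is the assumed PAN/HSI GSD ratio."""
    band_rmse = np.sqrt(np.mean((ref - est) ** 2, axis=(1, 2)))
    band_mean = np.mean(ref, axis=(1, 2)) + 1e-12
    return float(100.0 / ratio * np.sqrt(np.mean((band_rmse / band_mean) ** 2)))
```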


🛰️ Chikusei

Method PSNR SAM ERGAS
DBDENet 25.02 6.5243 4.2316
DHP-DARN 25.24 6.0044 3.9208
DIP-HyperKite 25.63 5.4180 3.7059
DMLD-Net 25.28 6.9856 4.1170
GPPNN 25.17 6.5704 4.1423
HyperPNN 25.34 5.7174 3.8096
DDLPS* 26.85 5.3557 3.7616
GSA* 24.21 6.2670 5.3903
Indusion* 22.29 5.4171 5.0367
PLRDiff* 26.18 6.3831 3.5699
SFIM* 25.54 5.4171 4.0868
uTDSP* 26.86 5.9152 3.3178

🛰️ Indian Pines

Method PSNR SAM ERGAS
DBDENet 23.66 3.4962 1.6389
DHP-DARN 23.97 3.6511 2.1060
DIP-HyperKite 25.15 3.6619 1.4592
DMLD-Net 24.03 3.9927 1.7721
GPPNN 24.80 4.4834 1.5553
HyperPNN 24.60 3.8811 1.5532
DDLPS* 17.72 4.5282 10.0461
GSA* 20.44 4.0885 5.0318
Indusion* 4.83 3.8504 14.8366
PLRDiff* 8.10 10.6969 19.8362
SFIM* 24.42 3.8504 1.5483
uTDSP* 25.79 3.5621 1.2763

🛰️ PaviaC

Method PSNR SAM ERGAS
DBDENet 23.63 18.5981 6.7944
DHP-DARN 26.70 12.4018 4.7917
DIP-HyperKite 26.48 12.5846 4.9198
DMLD-Net 26.03 16.8061 5.1832
GPPNN 27.37 11.2643 4.4916
HyperPNN 26.10 17.5919 5.1762
DDLPS* 27.52 10.1478 4.3781
GSA* 25.30 10.4678 5.6518
Indusion* 25.84 10.4645 5.8683
PLRDiff* 27.39 11.6904 4.6245
SFIM* 24.99 10.4488 5.9011
uTDSP* 28.53 10.4176 4.2664

🛰️ PaviaU

Method PSNR SAM ERGAS
DBDENet 28.84 6.5032 2.7593
DHP-DARN 29.27 6.8826 2.5350
DIP-HyperKite 29.30 6.0972 2.5114
DMLD-Net 28.66 6.8624 2.7985
GPPNN 29.86 6.0788 2.4812
HyperPNN 28.96 6.7555 2.6472
DDLPS* 27.81 7.1405 4.3781
GSA* 26.47 7.2522 2.9323
Indusion* 25.82 7.8229 4.1539
PLRDiff* 28.57 7.5217 2.9453
SFIM* 25.66 7.8229 3.8125
uTDSP* 30.68 6.8019 2.4660

🛰️ Quantitative Results on Satellite Datasets

Metrics: PSNR ↑ (higher is better), SAM ↓ and ERGAS ↓ (lower is better)
(* denotes an unsupervised method)


🛰️ Botswana

Method PSNR SAM ERGAS
DBDENet 22.84 8.5207 11.3979
DHP-DARN 28.85 4.9084 2.8164
DIP-HyperKite 30.24 4.8305 2.1305
DMLD-Net 26.87 6.5379 3.7552
GPPNN 26.44 8.6439 3.8965
HyperPNN 29.83 4.9803 2.2254
DDLPS* 22.27 6.9539 17.5198
GSA* 23.80 6.2035 11.6626
Indusion* 15.30 5.4225 9.7633
PLRDiff* 17.84 15.1475 9.0164
SFIM* 26.81 5.4225 2.7995
uTDSP* 31.61 3.7777 1.9155

🛰️ ZY1-02D

Method PSNR SAM ERGAS
DBDENet 11.39 22.3445 29.8753
DHP-DARN 14.71 7.9514 24.0358
DIP-HyperKite 19.73 2.1563 3.8904
DMLD-Net 13.41 13.4045 11.3059
GPPNN 14.35 12.1717 10.6018
HyperPNN 19.42 2.7872 5.5515
DDLPS* 26.03 1.8212 5.0919
GSA* 16.60 4.5868 20.0515
Indusion* 14.53 1.7358 6.7524
PLRDiff* 26.77 2.5636 2.7032
SFIM* 28.23 1.7358 1.6604
uTDSP* 31.22 1.7466 1.6917

📚 Citation

If you find this work helpful in your research, please cite:

@ARTICLE{11085108,
  author={Jiang, Hongcheng and Chen, ZhiQiang},
  journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing}, 
  title={Transformer-based Diffusion and Spectral Priors Model For Hyperspectral Pansharpening}, 
  year={2025},
  volume={},
  number={},
  pages={1-17},
  keywords={Hyperspectral imaging;Pansharpening;Diffusion models;Transformers;Estimation;Bayes methods;Noise reduction;Image reconstruction;Earth;Degradation;Hyperspectral imaging;pansharpening;spectral priors;diffusion model;transformer;remote sensing},
  doi={10.1109/JSTARS.2025.3590685}}

@inproceedings{jiang2025hyperspectral,
  title={Hyperspectral Pansharpening with Transformer-Based Spectral Diffusion Priors},
  author={Jiang, Hongcheng and Chen, ZhiQiang},
  booktitle={Proceedings of the Winter Conference on Applications of Computer Vision},
  pages={581--590},
  year={2025}
}

📬 Contact

If you have any questions, feedback, or collaboration ideas, feel free to reach out.
