Middle-Output Deep Image Prior for Blind Hyperspectral and Multispectral Image Fusion

Files
FV_Middle_Output_Deep_Image_Prior (1).pdf (1.835Mb)
Identifiers
URI: https://hdl.handle.net/20.500.12761/1895
ISSN: 0923-5965
DOI: 10.1016/j.image.2024.117247
Author(s)
Bacca, Jorge; Arcos, Cristian; Ramirez, Juan Marcos; Arguello, Henry
Date
2025-03-31
Abstract
Obtaining a low-spatial-resolution hyperspectral (HS) image or a low-spectral-resolution multispectral (MS) image from a high-resolution (HR) spectral image is straightforward with knowledge of the acquisition models. However, the reverse process, from HS and MS to HR, is an ill-posed problem known as spectral image fusion. Although recent fusion techniques based on supervised deep learning have shown promising results, these methods require large training datasets involving expensive acquisition costs and long training times. In contrast, unsupervised HS and MS image fusion methods have emerged as an alternative to the data demand issue; however, they rely on knowledge of the linear degradation models for optimal performance. To overcome these challenges, we propose the Middle-Output Deep Image Prior (MODIP) for unsupervised blind HS and MS image fusion. MODIP is adjusted for the HS and MS images, and the HR fused image is estimated at an intermediate layer within the network. The architecture comprises two convolutional networks that reconstruct the HR spectral image from the HS and MS inputs, along with two networks that appropriately downscale the estimated HR image to match the available MS and HS images, learning the non-linear degradation models. The network parameters of MODIP are jointly and iteratively adjusted by minimizing a proposed loss function. This approach can handle scenarios where the degradation operators are unknown or partially estimated. To evaluate the performance of MODIP, we test the fusion approach on three simulated spectral image datasets (Pavia University, Salinas Valley, and CAVE) and a real dataset obtained through a testbed implementation in an optical lab. Extensive simulations demonstrate that MODIP outperforms other unsupervised model-based image fusion methods by up to 6 dB in PSNR.
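The acquisition models mentioned in the abstract can be made concrete with a small sketch. MODIP itself learns non-linear degradation models with neural networks; the toy below instead uses fixed linear averaging operators (a common simplification) purely to illustrate the two data-fidelity terms that such unsupervised fusion methods minimize: the estimated HR image, pushed through the spatial and spectral degradation models, should match the observed HS and MS images. All sizes and function names here are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical sizes (not from the paper): an 8x8-pixel HR image with 6 bands.
H, W, L = 8, 8, 6
rng = np.random.default_rng(0)
hr = rng.random((H, W, L))  # stand-in ground-truth HR spectral image

def spatial_downsample(img, factor=2):
    """Toy HS acquisition model: average-pool each band spatially."""
    h, w, l = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, l).mean(axis=(1, 3))

def spectral_downsample(img, bands_per_channel=3):
    """Toy MS acquisition model: average groups of adjacent spectral bands."""
    h, w, l = img.shape
    return img.reshape(h, w, l // bands_per_channel, bands_per_channel).mean(axis=3)

hs = spatial_downsample(hr)   # low spatial resolution, full spectral resolution
ms = spectral_downsample(hr)  # full spatial resolution, low spectral resolution

def fusion_loss(estimate, hs_obs, ms_obs):
    """Sum of the two data-fidelity terms an unsupervised fusion method
    minimizes over its estimate of the HR image."""
    return (np.mean((spatial_downsample(estimate) - hs_obs) ** 2)
            + np.mean((spectral_downsample(estimate) - ms_obs) ** 2))

print(fusion_loss(hr, hs, ms))  # 0.0 for the true HR image
```

In MODIP the two downsampling operators are themselves networks trained jointly with the reconstruction networks, which is what allows the method to stay "blind" when the degradation operators are unknown or only partially estimated.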