Middle-Output Deep Image Prior for Blind Hyperspectral and Multispectral Image Fusion

Files
FV_Middle_Output_Deep_Image_Prior (1).pdf (1.835Mb)
Identifiers
URI: https://hdl.handle.net/20.500.12761/1895
ISSN: 0923-5965
DOI: 10.1016/j.image.2024.117247
Author(s)
Bacca, Jorge; Arcos, Cristian; Ramirez, Juan Marcos; Arguello, Henry
Date
2025-03-31
Abstract
Obtaining a low-spatial-resolution hyperspectral (HS) image or a low-spectral-resolution multispectral (MS) image from a high-resolution (HR) spectral image is straightforward when the acquisition models are known. However, the reverse process, recovering the HR image from the HS and MS images, is an ill-posed problem known as spectral image fusion. Although recent fusion techniques based on supervised deep learning have shown promising results, these methods require large training datasets, involving expensive acquisition costs and long training times. In contrast, unsupervised HS and MS image fusion methods have emerged as an alternative to these data demands; however, they rely on knowledge of the linear degradation models for optimal performance. To overcome these challenges, we propose the Middle-Output Deep Image Prior (MODIP) for unsupervised blind HS and MS image fusion. MODIP is fitted to the HS and MS images, and the HR fused image is estimated at an intermediate layer within the network. The architecture comprises two convolutional networks that reconstruct the HR spectral image from the HS and MS inputs, along with two networks that appropriately downscale the estimated HR image to match the available MS and HS images, learning the non-linear degradation models. The network parameters of MODIP are jointly and iteratively adjusted by minimizing a proposed loss function. This approach can handle scenarios where the degradation operators are unknown or only partially estimated. To evaluate the performance of MODIP, we test the fusion approach on three simulated spectral image datasets (Pavia University, Salinas Valley, and CAVE) and on a real dataset acquired through a testbed implementation in an optical lab. Extensive simulations demonstrate that MODIP outperforms other unsupervised model-based image fusion methods by up to 6 dB in PSNR.
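The two data-fidelity terms described above (the estimated HR image, spatially degraded, should match the HS input; spectrally degraded, it should match the MS input) can be sketched as follows. This is a minimal illustration, not the paper's implementation: MODIP learns the non-linear degradations with small networks, whereas here `avg_pool2` and `band_avg` are hypothetical fixed linear stand-ins.

```python
import numpy as np

def fusion_loss(hr_est, hs, ms, spatial_down, spectral_down):
    """Joint data-fidelity loss for blind HS/MS fusion (conceptual):
    penalize mismatch between the degraded HR estimate and each input."""
    hs_pred = spatial_down(hr_est)   # HR -> low spatial resolution
    ms_pred = spectral_down(hr_est)  # HR -> low spectral resolution
    return np.mean((hs_pred - hs) ** 2) + np.mean((ms_pred - ms) ** 2)

# Toy linear degradation operators (assumed for illustration only):
def avg_pool2(x):
    # spatial downsampling by 2x2 averaging over (H, W, C) arrays
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def band_avg(x, groups=3):
    # crude spectral response: average adjacent bands into `groups` bands
    h, w, c = x.shape
    return x.reshape(h, w, groups, c // groups).mean(axis=3)

rng = np.random.default_rng(0)
hr = rng.random((8, 8, 6))   # stand-in for the HR spectral image
hs = avg_pool2(hr)           # simulated low-spatial-resolution HS input
ms = band_avg(hr)            # simulated low-spectral-resolution MS input

print(fusion_loss(hr, hs, ms, avg_pool2, band_avg))  # 0.0 for a perfect estimate
```

In MODIP the degradation operators themselves are trainable, so minimizing a loss of this shape jointly over the reconstruction and degradation networks is what allows the method to remain blind to the true acquisition models.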
