Detailed Information

Cited 54 times in Web of Science; cited 62 times in Scopus

Cross-Modal Transformers for Infrared and Visible Image Fusion

Authors
Park, Seonghyun; Vien, An Gia; Lee, Chul
Issue Date
Feb-2024
Publisher
Institute of Electrical and Electronics Engineers
Keywords
Computer vision; Convolution; Data mining; Feature extraction; Image fusion; Infrared image; Self-attention; Task analysis; Transformers; Visible image
Citation
IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 2, pp. 770-785
Pages
16
Indexed
SCIE
SCOPUS
Journal Title
IEEE Transactions on Circuits and Systems for Video Technology
Volume
34
Number
2
Start Page
770
End Page
785
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/20231
DOI
10.1109/TCSVT.2023.3289170
ISSN
1051-8215 (print)
1558-2205 (online)
Abstract
Image fusion techniques aim to generate more informative images by merging multiple images of different modalities with complementary information. Despite the significant fusion performance improvements achieved by recent learning-based approaches, most fusion algorithms have been developed based on convolutional neural networks (CNNs), which stack deep layers to obtain a large receptive field for feature extraction. However, important details and contexts of the source images may be lost through a series of convolution layers. In this work, we propose a cross-modal transformer-based fusion (CMTFusion) algorithm for infrared and visible image fusion that captures global interactions by faithfully extracting complementary information from source images. Specifically, we first extract the multiscale feature maps of infrared and visible images. Then, we develop cross-modal transformers (CMTs) to retain complementary information in the source images by removing redundancies in both the spatial and channel domains. To this end, we design a gated bottleneck that integrates cross-domain interaction to consider the characteristics of the source images. Finally, a fusion result is obtained by exploiting spatial-channel information in refined feature maps using a fusion block. Experimental results on multiple datasets demonstrate that the proposed algorithm provides better fusion performance than state-of-the-art infrared and visible image fusion algorithms, both quantitatively and qualitatively. Furthermore, we show that the proposed algorithm can be used to improve the performance of computer vision tasks, e.g., object detection and monocular depth estimation.
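
To make the cross-modal exchange described in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch of a cross-attention block with a gating step between infrared and visible feature maps. The module name (CrossModalAttention), the channel/head sizes, and the gating design are illustrative assumptions only; they are not the authors' CMTFusion implementation.

# Hypothetical sketch: cross-modal attention between infrared and visible
# feature maps, loosely following the abstract's description. Not the
# authors' code; names, dimensions, and the gate are assumptions.
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Exchange complementary information between two modality features."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.ir_from_vis = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.vis_from_ir = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Simple sigmoid gate deciding how much cross-modal information to admit
        # (a stand-in for the paper's "gated bottleneck", not its actual design).
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, ir: torch.Tensor, vis: torch.Tensor):
        # ir, vis: (B, C, H, W) feature maps -> flatten spatial dims into tokens.
        b, c, h, w = ir.shape
        ir_tok = ir.flatten(2).transpose(1, 2)    # (B, H*W, C)
        vis_tok = vis.flatten(2).transpose(1, 2)  # (B, H*W, C)

        # Each modality queries the other to pick up complementary details.
        ir_cross, _ = self.ir_from_vis(ir_tok, vis_tok, vis_tok)
        vis_cross, _ = self.vis_from_ir(vis_tok, ir_tok, ir_tok)

        # Gated residual update keeps modality-specific content.
        ir_out = ir_tok + self.gate(torch.cat([ir_tok, ir_cross], dim=-1)) * ir_cross
        vis_out = vis_tok + self.gate(torch.cat([vis_tok, vis_cross], dim=-1)) * vis_cross

        # Restore the (B, C, H, W) layout.
        ir_out = ir_out.transpose(1, 2).reshape(b, c, h, w)
        vis_out = vis_out.transpose(1, 2).reshape(b, c, h, w)
        return ir_out, vis_out


if __name__ == "__main__":
    ir_feat = torch.randn(1, 64, 32, 32)   # infrared feature map (toy sizes)
    vis_feat = torch.randn(1, 64, 32, 32)  # visible feature map (toy sizes)
    block = CrossModalAttention(channels=64)
    fused_ir, fused_vis = block(ir_feat, vis_feat)
    print(fused_ir.shape, fused_vis.shape)  # torch.Size([1, 64, 32, 32]) each

In the paper's pipeline, refined feature pairs like these would then be combined by a fusion block; the usage example above only checks that the toy module preserves feature-map shapes.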
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Chul
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)