Cited 62 times in Web of Science
Cross-Modal Transformers for Infrared and Visible Image Fusion
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Park, Seonghyun | - |
| dc.contributor.author | Vien, An Gia | - |
| dc.contributor.author | Lee, Chul | - |
| dc.date.accessioned | 2024-08-08T08:01:42Z | - |
| dc.date.available | 2024-08-08T08:01:42Z | - |
| dc.date.issued | 2024-02 | - |
| dc.identifier.issn | 1051-8215 | - |
| dc.identifier.issn | 1558-2205 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/20231 | - |
| dc.description.abstract | Image fusion techniques aim to generate more informative images by merging multiple images of different modalities with complementary information. Despite significant fusion performance improvements of recent learning-based approaches, most fusion algorithms have been developed based on convolutional neural networks (CNNs), which stack deep layers to obtain a large receptive field for feature extraction. However, important details and contexts of the source images may be lost through a series of convolution layers. In this work, we propose a cross-modal transformer-based fusion (CMTFusion) algorithm for infrared and visible image fusion that captures global interactions by faithfully extracting complementary information from source images. Specifically, we first extract the multiscale feature maps of infrared and visible images. Then, we develop cross-modal transformers (CMTs) to retain complementary information in the source images by removing redundancies in both the spatial and channel domains. To this end, we design a gated bottleneck that integrates cross-domain interaction to consider the characteristics of the source images. Finally, a fusion result is obtained by exploiting spatial-channel information in refined feature maps using a fusion block. Experimental results on multiple datasets demonstrate that the proposed algorithm provides better fusion performance than state-of-the-art infrared and visible image fusion algorithms, both quantitatively and qualitatively. Furthermore, we show that the proposed algorithm can be used to improve the performance of computer vision tasks, e.g., object detection and monocular depth estimation. | - |
| dc.format.extent | 16 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.title | Cross-Modal Transformers for Infrared and Visible Image Fusion | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/TCSVT.2023.3289170 | - |
| dc.identifier.scopusid | 2-s2.0-85163442524 | - |
| dc.identifier.wosid | 001173373700008 | - |
| dc.identifier.bibliographicCitation | IEEE Transactions on Circuits and Systems for Video Technology, v.34, no.2, pp 770 - 785 | - |
| dc.citation.title | IEEE Transactions on Circuits and Systems for Video Technology | - |
| dc.citation.volume | 34 | - |
| dc.citation.number | 2 | - |
| dc.citation.startPage | 770 | - |
| dc.citation.endPage | 785 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.subject.keywordPlus | FACIAL EXPRESSION RECOGNITION | - |
| dc.subject.keywordPlus | REPRESENTATION | - |
| dc.subject.keywordPlus | ATTENTION | - |
| dc.subject.keywordPlus | FEATURES | - |
| dc.subject.keywordPlus | NETWORK | - |
| dc.subject.keywordPlus | JOINT | - |
| dc.subject.keywordAuthor | Computer vision | - |
| dc.subject.keywordAuthor | Convolution | - |
| dc.subject.keywordAuthor | Data mining | - |
| dc.subject.keywordAuthor | Feature extraction | - |
| dc.subject.keywordAuthor | Image fusion | - |
| dc.subject.keywordAuthor | infrared image | - |
| dc.subject.keywordAuthor | self-attention | - |
| dc.subject.keywordAuthor | Task analysis | - |
| dc.subject.keywordAuthor | transformer | - |
| dc.subject.keywordAuthor | Transformers | - |
| dc.subject.keywordAuthor | visible image | - |
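The abstract describes cross-modal transformers in which each modality's features are refined with complementary information from the other before a fusion block combines them. The following is a minimal NumPy sketch of that cross-attention idea only; the function names, single-scale features, and fusion-by-averaging are illustrative assumptions, not the paper's actual CMTFusion implementation (which uses multiscale features, a gated bottleneck, and a learned fusion block).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(f_q, f_kv):
    """Tokens of one modality (queries) attend to tokens of the other
    modality (keys/values), importing complementary information."""
    d = f_q.shape[-1]
    attn = softmax(f_q @ f_kv.T / np.sqrt(d), axis=-1)  # (N_q, N_kv)
    return attn @ f_kv

# Toy single-scale features: N spatial tokens x d channels per modality.
rng = np.random.default_rng(0)
f_ir = rng.normal(size=(64, 32))   # infrared feature map, flattened
f_vis = rng.normal(size=(64, 32))  # visible feature map, flattened

# Each modality is refined with information from the other (residual form).
r_ir = f_ir + cross_modal_attention(f_ir, f_vis)
r_vis = f_vis + cross_modal_attention(f_vis, f_ir)

# Stand-in for the paper's fusion block: simple averaging of refined maps.
fused = 0.5 * (r_ir + r_vis)
print(fused.shape)  # (64, 32)
```

In the paper this exchange happens in both the spatial and channel domains at multiple scales; the sketch shows only the spatial direction at one scale to keep the mechanism visible.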
