Multi-scale attention in attention neural network for single image deblurring
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Ho Sub | - |
| dc.contributor.author | Cho, Sung In | - |
| dc.date.accessioned | 2024-11-11T08:30:22Z | - |
| dc.date.available | 2024-11-11T08:30:22Z | - |
| dc.date.issued | 2024-12 | - |
| dc.identifier.issn | 0141-9382 | - |
| dc.identifier.issn | 1872-7387 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/56207 | - |
| dc.description.abstract | Image deblurring, which eliminates blurring artifacts to recover details from a given input image, is an important task in the computer vision field. Recently, attention mechanisms combined with deep neural networks (DNNs) have demonstrated promising performance in image deblurring. However, such networks have difficulty learning the complex relationships between blurry and sharp images while balancing spatial detail against high-level contextual information. Moreover, most existing attention-based DNN methods fail to selectively exploit the information from the attention and non-attention branches. To address these challenges, we propose a new approach called Multi-Scale Attention in Attention (MSAiA) for image deblurring. MSAiA incorporates dynamic weight generation by leveraging the joint dependencies of channel and spatial information, allowing the weight values in the attention and non-attention branches to change adaptively. In contrast to existing attention mechanisms, which primarily consider channel or spatial dependencies alone and do not adequately utilize the information from the attention and non-attention branches, our proposed AiA design combines channel-spatial attention. This attention mechanism exploits the dependencies between channel and spatial information to allocate weight values to the attention and non-attention branches, enabling the full utilization of information from both. Consequently, the attention branch can more effectively incorporate useful information, while the non-attention branch suppresses less useful information. Additionally, we employ a novel multi-scale neural network that learns the relationships between blurring artifacts and the original sharp image by further exploiting multi-scale information. The experimental results show that the proposed MSAiA achieves superior deblurring performance compared with state-of-the-art methods. | - |
| dc.format.extent | 14 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Elsevier BV | - |
| dc.title | Multi-scale attention in attention neural network for single image deblurring | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.displa.2024.102860 | - |
| dc.identifier.scopusid | 2-s2.0-85207316086 | - |
| dc.identifier.wosid | 001346820700001 | - |
| dc.identifier.bibliographicCitation | Displays, v.85, pp 1 - 14 | - |
| dc.citation.title | Displays | - |
| dc.citation.volume | 85 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 14 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Instruments & Instrumentation | - |
| dc.relation.journalResearchArea | Optics | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Hardware & Architecture | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
| dc.relation.journalWebOfScienceCategory | Optics | - |
| dc.subject.keywordPlus | MODEL | - |
| dc.subject.keywordPlus | DARK | - |
| dc.subject.keywordAuthor | Deep learning | - |
| dc.subject.keywordAuthor | Image deblurring | - |
| dc.subject.keywordAuthor | Attention in attention | - |
| dc.subject.keywordAuthor | Channel attention | - |
| dc.subject.keywordAuthor | Spatial attention | - |
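The abstract describes an attention-in-attention design in which joint channel-spatial statistics generate per-element weights that blend an attention branch with a non-attention (identity) branch. The following is a minimal NumPy sketch of that idea, not the paper's actual architecture: the gating function, the descriptor pooling, and the weights `w_ch` and `w_sp` are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aia_block(x, w_ch, w_sp):
    """Illustrative attention-in-attention block (assumed, simplified form).

    x    : feature map of shape (C, H, W)
    w_ch : (C, C) channel-mixing weights (hypothetical parameters)
    w_sp : scalar spatial-gate scale (hypothetical parameter)
    """
    # Attention branch: simple sigmoid self-gating of the features.
    attended = x * sigmoid(x)
    # Non-attention branch: identity pass-through.
    non_attended = x
    # Joint channel-spatial statistics drive the dynamic mixing weights.
    ch_desc = x.mean(axis=(1, 2))            # (C,)   per-channel statistic
    sp_desc = x.mean(axis=0)                 # (H, W) per-position statistic
    ch_gate = w_ch @ ch_desc                 # (C,)   mixed channel signal
    sp_gate = w_sp * sp_desc                 # (H, W) scaled spatial map
    # Per-element weight in (0, 1): how much to trust the attention branch.
    alpha = sigmoid(ch_gate[:, None, None] + sp_gate[None, :, :])
    # Convex combination selectively uses both branches.
    return alpha * attended + (1.0 - alpha) * non_attended

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w_ch = 0.1 * rng.standard_normal((4, 4))
out = aia_block(x, w_ch, w_sp=0.5)
```

Because `alpha` lies in (0, 1), each output element is a convex combination of the two branches, which is one way the described design can let the attention branch dominate where it is useful and fall back to the unattended features elsewhere.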
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.
