CNCAN: Contrast and normal channel attention network for super-resolution image reconstruction of crops and weeds
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Sung Jae | - |
| dc.contributor.author | Yun, Chaeyeong | - |
| dc.contributor.author | Im, Su Jin | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2024-11-04T04:30:19Z | - |
| dc.date.available | 2024-11-04T04:30:19Z | - |
| dc.date.issued | 2024-12 | - |
| dc.identifier.issn | 0952-1976 | - |
| dc.identifier.issn | 1873-6769 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/56151 | - |
| dc.description.abstract | Numerous studies have been performed to apply camera vision technologies in robot-based agriculture and smart farms. In particular, to obtain high accuracy, it is essential to procure high-resolution (HR) images, which requires a high-performance camera. However, due to high costs, it is difficult to widely apply such cameras in agricultural robots. To overcome this limitation, we propose a contrast and normal channel attention network (CNCAN) for super-resolution reconstruction (SR), which is the first study on accurate semantic segmentation of crops and weeds even with low-resolution (LR) images captured by a low-cost, LR camera. An attention block and activation function that consider the high-frequency and contrast information of images are used in CNCAN, and a residual connection method is applied to improve learning stability. In experiments with three open datasets, namely the Bonirob, rice seedling and weed, and crop/weed field image (CWFID) datasets, the mean intersection over union (MIoU) results of semantic segmentation for crops and weeds with SR images produced by CNCAN were 0.7685, 0.6346, and 0.6931, respectively, confirming higher accuracy than other state-of-the-art SR methods. | - |
| dc.format.extent | 21 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Elsevier Ltd | - |
| dc.title | CNCAN: Contrast and normal channel attention network for super-resolution image reconstruction of crops and weeds | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.engappai.2024.109487 | - |
| dc.identifier.scopusid | 2-s2.0-85206681913 | - |
| dc.identifier.wosid | 001340014800001 | - |
| dc.identifier.bibliographicCitation | Engineering Applications of Artificial Intelligence, v.138, no.Part B, pp 1 - 21 | - |
| dc.citation.title | Engineering Applications of Artificial Intelligence | - |
| dc.citation.volume | 138 | - |
| dc.citation.number | Part B | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 21 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Automation & Control Systems | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalWebOfScienceCategory | Automation & Control Systems | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.subject.keywordPlus | SEMANTIC SEGMENTATION | - |
| dc.subject.keywordPlus | AGRICULTURE | - |
| dc.subject.keywordAuthor | Low-resolution images | - |
| dc.subject.keywordAuthor | Super-resolution reconstruction | - |
| dc.subject.keywordAuthor | Semantic segmentation | - |
| dc.subject.keywordAuthor | Crops and weeds images | - |
| dc.subject.keywordAuthor | Contrast and normal channel attention | - |
