Cited 7 times
LCW-Net: Low-light-image-based crop and weed segmentation network using attention module in two decoders
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Yu Hwan | - |
| dc.contributor.author | Lee, Sung Jae | - |
| dc.contributor.author | Yun, Chaeyeong | - |
| dc.contributor.author | Im, Su Jin | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2024-08-08T10:00:35Z | - |
| dc.date.available | 2024-08-08T10:00:35Z | - |
| dc.date.issued | 2023-11 | - |
| dc.identifier.issn | 0952-1976 | - |
| dc.identifier.issn | 1873-6769 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/21043 | - |
| dc.description.abstract | Crop segmentation using cameras is commonly applied over large agricultural areas, but the timing and duration of crop harvesting vary across large farms. Considering this situation, low-light-image-based segmentation of crop and weed images is needed for late-time harvesting, but no prior research has considered it. As a first study on this topic, we propose a low-light-image-based crop and weed segmentation network (LCW-Net) that uses an attention module in two decoders to perform segmentation in a single step, without restoring the low-light images. We also design a loss function that accurately segments object, crop, and weed regions in low-light images, avoids overfitting during training, and balances the learning tasks for object, crop, and weed segmentation. Because there are no existing public low-light databases, and it is difficult to obtain ground-truth segmentation information for a self-collected database in low-light environments, we experimented by converting two public databases, the crop and weed field image dataset (CWFID) and the BoniRob dataset, into low-light datasets. The experimental results showed that the mean intersection-over-union (mIoU) values of segmentation for crops and weeds were 0.8718 and 0.8693, respectively, for the BoniRob dataset, and 0.8337 and 0.8221, respectively, for the CWFID dataset, indicating that LCW-Net outperforms state-of-the-art methods. © 2023 The Author(s) | - |
| dc.format.extent | 14 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Elsevier Ltd | - |
| dc.title | LCW-Net: Low-light-image-based crop and weed segmentation network using attention module in two decoders | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.engappai.2023.106890 | - |
| dc.identifier.scopusid | 2-s2.0-85170434249 | - |
| dc.identifier.wosid | 001080353700001 | - |
| dc.identifier.bibliographicCitation | Engineering Applications of Artificial Intelligence, v.126, pp 1 - 14 | - |
| dc.citation.title | Engineering Applications of Artificial Intelligence | - |
| dc.citation.volume | 126 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 14 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Automation & Control Systems | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalWebOfScienceCategory | Automation & Control Systems | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.subject.keywordAuthor | Attention module in two decoders | - |
| dc.subject.keywordAuthor | LCW-Net | - |
| dc.subject.keywordAuthor | Low-light image | - |
| dc.subject.keywordAuthor | Semantic segmentation for crops and weeds | - |
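
The abstract above reports performance as mean intersection over union (mIoU) for crop and weed classes on low-light versions of the CWFID and BoniRob datasets. As a rough illustration of that metric and of a low-light conversion, below is a minimal sketch, assuming integer label maps with background/crop/weed classes and a simple gamma-based darkening; the function names, class indexing, and conversion method are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): per-class IoU and mean IoU for
# crop/weed segmentation, assuming label maps with 0 = background,
# 1 = crop, 2 = weed, plus an assumed gamma-based low-light conversion.
import numpy as np

def iou_per_class(pred, gt, num_classes=3):
    """Return IoU for each class given integer label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        inter = np.logical_and(pred_c, gt_c).sum()
        union = np.logical_or(pred_c, gt_c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

def mean_iou(pred, gt, num_classes=3):
    """Mean IoU over classes, ignoring classes absent from both masks."""
    return float(np.nanmean(iou_per_class(pred, gt, num_classes)))

def simulate_low_light(image, gamma=3.0):
    """Darken an RGB image in [0, 255] with a gamma curve (assumed conversion)."""
    norm = image.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

if __name__ == "__main__":
    gt = np.random.randint(0, 3, size=(64, 64))
    pred = gt.copy()
    pred[:8, :8] = 0  # introduce some disagreement with the ground truth
    print("per-class IoU:", iou_per_class(pred, gt))
    print("mIoU:", mean_iou(pred, gt))
    rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print("low-light sample mean intensity:", simulate_low_light(rgb).mean())
```

The per-class figures quoted in the abstract (e.g., 0.8718 for crops and 0.8693 for weeds on BoniRob) would correspond to averaging such per-class IoU values over the test images.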
