Cited 13 times.
Global–local feature learning for fine-grained food classification based on Swin Transformer
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Jun-Hwa | - |
| dc.contributor.author | Kim, Namho | - |
| dc.contributor.author | Won, Chee Sun | - |
| dc.date.accessioned | 2024-08-08T11:31:07Z | - |
| dc.date.available | 2024-08-08T11:31:07Z | - |
| dc.date.issued | 2024-07 | - |
| dc.identifier.issn | 0952-1976 | - |
| dc.identifier.issn | 1873-6769 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/21694 | - |
| dc.description.abstract | Separable object parts, such as the head and tail in a bird, are vital for fine-grained visual classifications. For those objects without separable parts, the classification task relies only on local and global textural image features. Although the Swin Transformer architecture was proposed to efficiently capture both local and global visual features, it still exhibits a bias towards global features. Therefore, our goal is to enhance the local feature learning capability of the Swin Transformer by adding four new modules: the Local Feature Extraction Network (L-FEN), Convolution Patch-Merging (CP), Multi-Path (MP), and Multi-View (MV). The L-FEN enhances the Swin Transformer with improved local feature capture. The CP is a localized and hierarchical adaptation of Swin's Patch Merging technique. The MP method integrates features across various Swin stages to accentuate local details. Meanwhile, the MV Swin Transformer block supersedes traditional Swin blocks with those incorporating varied receptive fields, ensuring a broader scope of local feature capture. Our enhanced architecture, named Global–Local Swin Transformer (GL-Swin), is applied to solve a fine-grained food classification task. On three major food datasets, ISIA Food-500, UEC Food-256, and Food-101, our GL-Swin achieved accuracies of 66.75%, 85.78%, and 92.93%, respectively, consistently outperforming other leading methods. © 2024 Elsevier Ltd | - |
| dc.format.extent | 7 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Elsevier Ltd | - |
| dc.title | Global–local feature learning for fine-grained food classification based on Swin Transformer | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.engappai.2024.108248 | - |
| dc.identifier.scopusid | 2-s2.0-85187783530 | - |
| dc.identifier.wosid | 001206555100001 | - |
| dc.identifier.bibliographicCitation | Engineering Applications of Artificial Intelligence, v.133, pp 1 - 7 | - |
| dc.citation.title | Engineering Applications of Artificial Intelligence | - |
| dc.citation.volume | 133 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 7 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Automation & Control Systems | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalWebOfScienceCategory | Automation & Control Systems | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.subject.keywordAuthor | CNN | - |
| dc.subject.keywordAuthor | Deep learning | - |
| dc.subject.keywordAuthor | Fine-grained visual classification | - |
| dc.subject.keywordAuthor | Food dataset | - |
| dc.subject.keywordAuthor | Vision transformer | - |
