Global–local feature learning for fine-grained food classification based on Swin Transformer
- Authors
- Kim, Jun-Hwa; Kim, Namho; Won, Chee Sun
- Issue Date
- Jul-2024
- Publisher
- Elsevier Ltd
- Keywords
- CNN; Deep learning; Fine-grained visual classification; Food dataset; Vision transformer
- Citation
- Engineering Applications of Artificial Intelligence, v.133, pp 1 - 7
- Pages
- 7
- Indexed
- SCIE; SCOPUS
- Journal Title
- Engineering Applications of Artificial Intelligence
- Volume
- 133
- Start Page
- 1
- End Page
- 7
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/21694
- DOI
- 10.1016/j.engappai.2024.108248
- ISSN
- 0952-1976
1873-6769
- Abstract
- Separable object parts, such as the head and tail of a bird, are vital for fine-grained visual classification. For objects without separable parts, the classification task relies only on local and global textural image features. Although the Swin Transformer architecture was proposed to efficiently capture both local and global visual features, it still exhibits a bias towards global features. Therefore, our goal is to enhance the local feature learning capability of the Swin Transformer by adding four new modules: the Local Feature Extraction Network (L-FEN), Convolution Patch-Merging (CP), Multi-Path (MP), and Multi-View (MV). The L-FEN augments the Swin Transformer with improved local feature capture. The CP is a localized and hierarchical adaptation of Swin's patch-merging technique. The MP method integrates features across the Swin stages to accentuate local details. Meanwhile, the MV Swin Transformer block replaces traditional Swin blocks with blocks incorporating varied receptive fields, ensuring a broader scope of local feature capture. Our enhanced architecture, named the Global–Local Swin Transformer (GL-Swin), is applied to a fine-grained food classification task. On three major food datasets, ISIA Food-500, UEC Food-256, and Food-101, our GL-Swin achieved accuracies of 66.75%, 85.78%, and 92.93%, respectively, consistently outperforming other leading methods. © 2024 Elsevier Ltd
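- Illustrative sketch
- To make the global-local fusion idea in the abstract concrete, the PyTorch sketch below pairs a pooled embedding from a Swin-style backbone with a small convolutional local-feature branch and concatenates the two before classification. This is only a minimal sketch under stated assumptions: the class names (`LocalFeatureExtractionNetwork`, `GlobalLocalClassifier`), the dummy backbone, and all layer sizes are hypothetical and do not reproduce the paper's actual L-FEN, CP, MP, or MV modules.

```python
import torch
import torch.nn as nn


class LocalFeatureExtractionNetwork(nn.Module):
    """Hypothetical stand-in for a local-feature branch: a shallow CNN
    that extracts local texture features in parallel with the backbone."""

    def __init__(self, in_ch: int = 3, out_dim: int = 256):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_dim, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_dim), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, out_dim) pooled local-texture embedding
        return self.pool(self.convs(x)).flatten(1)


class GlobalLocalClassifier(nn.Module):
    """Sketch of global-local fusion: a backbone assumed to return a pooled
    global embedding is concatenated with the local branch's output."""

    def __init__(self, global_backbone: nn.Module, global_dim: int,
                 local_dim: int = 256, num_classes: int = 500):
        super().__init__()
        self.global_backbone = global_backbone  # e.g. a Swin Transformer
        self.local_branch = LocalFeatureExtractionNetwork(out_dim=local_dim)
        self.head = nn.Linear(global_dim + local_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.global_backbone(x)   # (B, global_dim) global features
        l = self.local_branch(x)      # (B, local_dim) local features
        return self.head(torch.cat([g, l], dim=1))


if __name__ == "__main__":
    # Placeholder "backbone" that mimics a pooled Swin embedding; in practice
    # a pretrained Swin Transformer returning a (B, C) vector would be used.
    dummy_backbone = nn.Sequential(
        nn.Conv2d(3, 96, 4, stride=4), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    model = GlobalLocalClassifier(dummy_backbone, global_dim=96, num_classes=500)
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 500])
```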
- Appears in Collections
- College of Engineering > Department of Electronics and Electrical Engineering > 1. Journal Articles
