LM-CLIP: Adapting Positive Asymmetric Loss for Long-Tailed Multi-Label Classification (Open Access)
- Authors
- Timmermann, Christoph; Jung, Seunghyeon; Kim, Miso; Lee, Woojin
- Issue Date
- 2025
- Publisher
- IEEE
- Keywords
- Heavily-tailed distribution; Head; Tail; Multi label classification; Training; Visualization; Adaptation models; Focusing; Tuning; Optimization; Long-tailed learning; multi-label classification; CLIP; vision-language models; contrastive learning; class imbalance; loss functions; asymmetric loss; balanced asymmetric loss; imbalanced sampling
- Citation
- IEEE Access, v.13, pp. 71053-71065
- Pages
- 13
- Indexed
- SCIE; SCOPUS
- Journal Title
- IEEE Access
- Volume
- 13
- Start Page
- 71053
- End Page
- 71065
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/58288
- DOI
- 10.1109/ACCESS.2025.3561581
- ISSN
- 2169-3536
- Abstract
- Accurate multi-label image classification is essential for real-world applications, especially in scenarios with long-tailed class distributions, where some classes appear frequently while others are rare. This imbalance often leads to biased models that struggle to accurately recognize underrepresented classes. Existing methods either trade off performance between head and tail classes or rely on image captions, limiting adaptability. To address these limitations, we propose LM-CLIP, a novel framework built around a unified loss function. Our Balanced Asymmetric Loss (BAL) extends traditional asymmetric loss by emphasizing the gradients of rare positive samples where the model is uncertain, mitigating bias toward dominant classes. This is complemented by a contrastive loss that pushes negative samples further from the decision boundary, creating a more optimal embedding space even in long-tailed scenarios. These loss functions together ensure balanced performance across all classes. Our framework is built on pre-trained models utilizing textual and visual features from millions of image-text pairs. Furthermore, we incorporate a dynamic sampling strategy that prioritizes rare classes based on their occurrence, which ensures effective training without compromising overall performance. Experiments conducted on VOC-MLT and COCO-MLT benchmarks demonstrate the effectiveness of our approach, achieving +4.66% and +8.14% improvements in mean Average Precision (mAP) over state-of-the-art methods. Our code is publicly available at https://github.com/damilab/lm-clip.
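To make the abstract's starting point concrete, the sketch below implements the standard asymmetric loss (ASL) that the paper's Balanced Asymmetric Loss extends: positives and negatives receive different focusing exponents, and easy negatives are suppressed via probability shifting. This is a minimal NumPy illustration of the baseline technique only, not the authors' BAL; the function name and parameter defaults (`gamma_pos`, `gamma_neg`, `clip`) are illustrative assumptions, not taken from the paper or its code.

```python
import numpy as np

def asymmetric_loss(probs, targets, gamma_pos=0.0, gamma_neg=4.0, clip=0.05):
    """Standard asymmetric loss for multi-label classification (illustrative sketch).

    probs:   predicted per-class probabilities in (0, 1)
    targets: binary labels (1 = class present, 0 = absent)
    """
    probs = np.clip(probs, 1e-8, 1 - 1e-8)
    # Probability shifting: subtract a margin from negative predictions so that
    # very easy negatives (p < clip) contribute (near-)zero loss.
    probs_neg = np.clip(probs - clip, 1e-8, 1 - 1e-8)
    # Positive term: focal-style down-weighting of easy positives (gamma_pos often 0).
    pos_loss = targets * (1 - probs) ** gamma_pos * np.log(probs)
    # Negative term: strong focusing (gamma_neg > gamma_pos) on hard negatives.
    neg_loss = (1 - targets) * probs_neg ** gamma_neg * np.log(1 - probs_neg)
    return -np.mean(pos_loss + neg_loss)
```

The asymmetry (`gamma_neg > gamma_pos`) keeps the abundant easy negatives of long-tailed multi-label data from dominating the gradient; per the abstract, BAL goes further by re-emphasizing rare positives on which the model is uncertain.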
- Appears in
Collections - ETC > 1. Journal Articles
