Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

LM-CLIP: Adapting Positive Asymmetric Loss for Long-Tailed Multi-Label Classification

Full metadata record
DC Field: Value
dc.contributor.author: Timmermann, Christoph
dc.contributor.author: Jung, Seunghyeon
dc.contributor.author: Kim, Miso
dc.contributor.author: Lee, Woojin
dc.date.accessioned: 2025-05-13T02:00:15Z
dc.date.available: 2025-05-13T02:00:15Z
dc.date.issued: 2025
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/58288
dc.description.abstract: Accurate multi-label image classification is essential for real-world applications, especially in scenarios with long-tailed class distributions, where some classes appear frequently while others are rare. This imbalance often leads to biased models that struggle to accurately recognize underrepresented classes. Existing methods either trade off performance between head and tail classes or rely on image captions, limiting adaptability. To address these limitations, we propose LM-CLIP, a novel framework built around a unified loss function. Our Balanced Asymmetric Loss (BAL) extends traditional asymmetric loss by emphasizing the gradients of rare positive samples where the model is uncertain, mitigating bias toward dominant classes. This is complemented by a contrastive loss that pushes negative samples further from the decision boundary, creating a more optimal embedding space even in long-tailed scenarios. These loss functions together ensure balanced performance across all classes. Our framework is built on pre-trained models utilizing textual and visual features from millions of image-text pairs. Furthermore, we incorporate a dynamic sampling strategy that prioritizes rare classes based on their occurrence, which ensures effective training without compromising overall performance. Experiments conducted on VOC-MLT and COCO-MLT benchmarks demonstrate the effectiveness of our approach, achieving +4.66% and +8.14% improvements in mean Average Precision (mAP) over state-of-the-art methods. Our code is publicly available at https://github.com/damilab/lm-clip.
dc.format.extent: 13
dc.language: English
dc.language.iso: ENG
dc.publisher: IEEE
dc.title: LM-CLIP: Adapting Positive Asymmetric Loss for Long-Tailed Multi-Label Classification
dc.type: Article
dc.publisher.location: United States
dc.identifier.doi: 10.1109/ACCESS.2025.3561581
dc.identifier.scopusid: 2-s2.0-105002815455
dc.identifier.wosid: 001479454700050
dc.identifier.bibliographicCitation: IEEE Access, v.13, pp. 71053-71065
dc.citation.title: IEEE Access
dc.citation.volume: 13
dc.citation.startPage: 71053
dc.citation.endPage: 71065
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Telecommunications
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Telecommunications
dc.subject.keywordAuthor: Heavily-tailed distribution
dc.subject.keywordAuthor: Head
dc.subject.keywordAuthor: Tail
dc.subject.keywordAuthor: Multi-label classification
dc.subject.keywordAuthor: Training
dc.subject.keywordAuthor: Visualization
dc.subject.keywordAuthor: Adaptation models
dc.subject.keywordAuthor: Focusing
dc.subject.keywordAuthor: Tuning
dc.subject.keywordAuthor: Optimization
dc.subject.keywordAuthor: Long-tailed learning
dc.subject.keywordAuthor: multi-label classification
dc.subject.keywordAuthor: CLIP
dc.subject.keywordAuthor: vision-language models
dc.subject.keywordAuthor: contrastive learning
dc.subject.keywordAuthor: class imbalance
dc.subject.keywordAuthor: loss functions
dc.subject.keywordAuthor: asymmetric loss
dc.subject.keywordAuthor: balanced asymmetric loss
dc.subject.keywordAuthor: imbalanced sampling
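The abstract states that the paper's Balanced Asymmetric Loss (BAL) extends the traditional asymmetric loss for multi-label classification. The full BAL formulation is only given in the paper itself, but the baseline asymmetric loss it builds on can be sketched as follows (a minimal NumPy sketch; the parameter values `gamma_pos`, `gamma_neg`, and `clip` are illustrative defaults, not values taken from this paper):

```python
import numpy as np

def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, clip=0.05):
    """Baseline asymmetric loss for multi-label classification.

    Per-label sigmoid cross-entropy with asymmetric focusing: negatives are
    down-weighted by p**gamma_neg (and easy negatives are hard-thresholded
    via probability shifting with `clip`), while positives are down-weighted
    by (1 - p)**gamma_pos. This is the loss family the abstract says BAL
    extends; BAL's rare-positive reweighting itself is not reproduced here.
    """
    p = 1.0 / (1.0 + np.exp(-logits))                 # sigmoid probabilities
    # Probability shifting: discard very easy negatives entirely.
    p_neg = np.clip(p - clip, a_min=0.0, a_max=None)
    # Positive term, focused by (1 - p)**gamma_pos.
    loss_pos = targets * ((1.0 - p) ** gamma_pos) * np.log(np.clip(p, 1e-8, 1.0))
    # Negative term, focused by p_neg**gamma_neg.
    loss_neg = (1.0 - targets) * (p_neg ** gamma_neg) * np.log(np.clip(1.0 - p_neg, 1e-8, 1.0))
    return -(loss_pos + loss_neg).mean()
```

With `gamma_neg > gamma_pos`, well-classified negatives contribute almost nothing to the loss, so the abundant negative labels typical of long-tailed multi-label data do not drown out the rare positives.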
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Woo Jin
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
