Cited 8 times
CAM-CAN: Class activation map-based categorical adversarial network
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Batchuluun, Ganbayar | - |
| dc.contributor.author | Choi, Jiho | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2024-08-08T10:01:33Z | - |
| dc.date.available | 2024-08-08T10:01:33Z | - |
| dc.date.issued | 2023-07 | - |
| dc.identifier.issn | 0957-4174 | - |
| dc.identifier.issn | 1873-6793 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/21256 | - |
| dc.description.abstract | Numerous studies have investigated image classification. In particular, recent methods based on deep learning have exhibited high accuracies. However, various existing state-of-the-art deep learning methods show different accuracies depending on the database and environment. Accordingly, different deep learning models need to be used in image classification studies according to the database, environment, and research field. This study investigated a technique to increase the accuracy of existing deep learning-based models, and the proposed method was applied to various existing state-of-the-art methods. In the proposed method, a convolutional neural network (CNN) is trained using the class activation map (CAM) to focus on specific areas in the input image, with the CAM image used as the ground-truth image. Furthermore, this paper proposes the concept of the CAM-based categorical adversarial network (CAM-CAN), in which the CNN is trained based on a generative adversarial network. An action recognition experiment was performed using the self-collected Dongguk thermal image database (DTh-DB) and an open database, and the results revealed that the accuracies of the existing state-of-the-art methods significantly increased after applying the proposed method. For instance, with the DTh-DB, the TPR, PPV, ACC, and F1 score obtained using the conventional DenseNet201 model were 80.14%, 75.28%, 96.0%, and 75.91%, respectively; after applying the proposed method, they increased to 86.53%, 89.90%, 97.64%, and 85.84%, respectively. | - |
| dc.format.extent | 24 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Elsevier Ltd | - |
| dc.title | CAM-CAN: Class activation map-based categorical adversarial network | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1016/j.eswa.2023.119809 | - |
| dc.identifier.scopusid | 2-s2.0-85150041138 | - |
| dc.identifier.wosid | 000956597400001 | - |
| dc.identifier.bibliographicCitation | Expert Systems with Applications, v.222, pp 1 - 24 | - |
| dc.citation.title | Expert Systems with Applications | - |
| dc.citation.volume | 222 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 24 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Operations Research & Management Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Operations Research & Management Science | - |
| dc.subject.keywordPlus | RECOGNITION | - |
| dc.subject.keywordAuthor | Deep learning | - |
| dc.subject.keywordAuthor | CNN | - |
| dc.subject.keywordAuthor | CAM-CAN | - |
| dc.subject.keywordAuthor | GAN | - |
| dc.subject.keywordAuthor | Action recognition | - |
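The abstract describes training a CNN with the class activation map (CAM) as a ground-truth attention target. As background only (this is not the paper's implementation; the function and variable names below are illustrative), a CAM for a given class is conventionally computed as the class-weighted sum of the last convolutional layer's feature maps, which can be sketched in NumPy as:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a class activation map (CAM): the class-weighted sum of the
    final convolutional feature maps, rectified and normalized to [0, 1].

    feature_maps: (C, H, W) activations from the last conv layer
    fc_weights:   (num_classes, C) weights of the final linear classifier
    """
    # Weighted sum over channels for the chosen class -> (H, W) map
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam = np.maximum(cam, 0.0)  # keep only positive class evidence
    rng = cam.max() - cam.min()
    return (cam - cam.min()) / rng if rng > 0 else np.zeros_like(cam)

# Toy example: 4 channels of 8x8 activations, 3-class classifier
rng_gen = np.random.default_rng(0)
feats = rng_gen.random((4, 8, 8))
weights = rng_gen.random((3, 4))
cam = class_activation_map(feats, weights, class_idx=1)
print(cam.shape)  # (8, 8)
```

In the paper's setting, such a map (resized to the input resolution) serves as the ground-truth image guiding where the CNN should attend.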
