Cited 42 times
Action Recognition From Thermal Videos
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Batchuluun, Ganbayar | - |
| dc.contributor.author | Nguyen, Dat Tien | - |
| dc.contributor.author | Pham, Tuyen Danh | - |
| dc.contributor.author | Park, Chanhum | - |
| dc.contributor.author | Park, Kang Ryoung | - |
| dc.date.accessioned | 2024-08-08T06:00:44Z | - |
| dc.date.available | 2024-08-08T06:00:44Z | - |
| dc.date.issued | 2019 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/18718 | - |
| dc.description.abstract | Human action recognition using a camera-based surveillance system remains a challenging task. In particular, action recognition is difficult when a human is not visible in an image captured in a dark environment. The existing studies have utilized near-infrared (NIR) and thermal cameras to solve this problem. Compared to NIR cameras, thermal cameras enable long- and short-distance objects to be visible without an additional illuminator. However, thermal cameras have two major disadvantages: a halo effect and a temperature similarity. A halo effect occurs around an object with a high temperature. In a human object, such a halo effect is similar to a shadow under the body area. It is more difficult to segment a human area from an image with a halo effect. Moreover, if the background and human object have similar temperatures, it becomes more difficult to segment the human area. These disadvantages influence not only the accuracy of the segmentation of the human area but also the performance of human action recognition. Unfortunately, no studies have considered these issues. To address these problems, this study proposes the cycle-consistent generative adversarial network (CycleGAN)-based methods for removing halo effects from thermal images and restoring the areas of the human bodies. In addition, this study also considered a method for creating a skeleton image from a thermal image to analyze body movements. To extract more spatial and temporal features from skeleton image sequences thus created, a method for human action recognition that combines a convolutional neural network (CNN) and long short-term memory (LSTM) was proposed. In an experiment using an open database (Dongguk activities & actions database (DA&A-DB2)), the proposed method demonstrated a better performance than the existing methods. | - |
| dc.format.extent | 25 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
| dc.title | Action Recognition From Thermal Videos | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2019.2931804 | - |
| dc.identifier.scopusid | 2-s2.0-85078293958 | - |
| dc.identifier.wosid | 000481692400021 | - |
| dc.identifier.bibliographicCitation | IEEE ACCESS, v.7, pp 103893 - 103917 | - |
| dc.citation.title | IEEE ACCESS | - |
| dc.citation.volume | 7 | - |
| dc.citation.startPage | 103893 | - |
| dc.citation.endPage | 103917 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordPlus | ALGORITHM | - |
| dc.subject.keywordAuthor | Human action recognition | - |
| dc.subject.keywordAuthor | halo effect | - |
| dc.subject.keywordAuthor | image restoration and skeleton generation | - |
| dc.subject.keywordAuthor | thermal camera | - |
| dc.subject.keywordAuthor | CNN stacked LSTM | - |
| dc.subject.keywordAuthor | CycleGAN | - |
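The abstract describes recognizing actions by extracting per-frame spatial features with a CNN and feeding the resulting sequence to an LSTM. The following is a minimal, untrained sketch of that CNN-then-LSTM pipeline shape, not the authors' actual architecture: the per-frame CNN is stood in for by a fixed random projection, and a single hand-rolled NumPy LSTM cell is unrolled over the frames before a softmax classifier. All sizes (`FEAT`, `HID`, `n_classes`, the 64x64 frames) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

FEAT = 32  # per-frame feature size (stand-in for the CNN output)
HID = 16   # LSTM hidden-state size

# Fixed random projection standing in for a trained per-frame CNN.
W_cnn = rng.standard_normal((64 * 64, FEAT)) * 0.01

def cnn_features(frame):
    """Map one 64x64 skeleton image to a feature vector (CNN stand-in)."""
    return frame.reshape(-1) @ W_cnn

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One set of weights per gate: input, forget, output, cell candidate.
W = rng.standard_normal((4, HID, FEAT + HID)) * 0.1
b = np.zeros((4, HID))

def lstm_step(x, h, c):
    """One LSTM time step over input x with state (h, c)."""
    z = np.concatenate([x, h])
    i = sigmoid(W[0] @ z + b[0])  # input gate
    f = sigmoid(W[1] @ z + b[1])  # forget gate
    o = sigmoid(W[2] @ z + b[2])  # output gate
    g = np.tanh(W[3] @ z + b[3])  # cell candidate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def recognize(frames, n_classes=5):
    """Run the CNN stand-in per frame, unroll the LSTM, classify."""
    h = np.zeros(HID)
    c = np.zeros(HID)
    for frame in frames:
        h, c = lstm_step(cnn_features(frame), h, c)
    # Softmax classifier over the final hidden state.
    W_out = rng.standard_normal((n_classes, HID)) * 0.1
    logits = W_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

# A toy sequence of 10 "skeleton images".
video = rng.random((10, 64, 64))
probs = recognize(video)
```

Running `recognize` on the toy sequence yields one probability per action class; in the paper this role is played by the trained CNN-stacked-LSTM operating on skeleton images generated from halo-corrected thermal frames.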
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.
