Deep Edge Computing for Videos
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Jun-Hwa | - |
| dc.contributor.author | Kim, Namho | - |
| dc.contributor.author | Won, Chee Sun | - |
| dc.date.accessioned | 2023-04-27T19:41:02Z | - |
| dc.date.available | 2023-04-27T19:41:02Z | - |
| dc.date.issued | 2021 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/5659 | - |
| dc.description.abstract | This paper provides a modular architecture with deep neural networks as a solution for real-time video analytics in an edge-computing environment. The modular architecture consists of two networks, a Front-CNN (Convolutional Neural Network) and a Back-CNN, where we adopt a Shallow 3D CNN (S3D) as the Front-CNN and a pre-trained 2D CNN as the Back-CNN. The S3D (i.e., the Front-CNN) is in charge of condensing a sequence of video frames into a feature map with three channels. That is, the S3D takes a set of sequential frames in the video shot as input and yields a learned three-channel feature map (3CFM) as output. Since the 3CFM is compatible with the three-channel RGB color image format, we can use the output of the S3D (i.e., the 3CFM) as the input to the pre-trained 2D CNN of the Back-CNN for transfer learning. This serial connection of the Front-CNN and Back-CNN architecture is end-to-end trainable and learns both spatial and temporal information of videos. Experimental results on the public UCF-Crime and UR-Fall Detection datasets show that the proposed S3D-2DCNN model outperforms existing methods and achieves state-of-the-art performance. Moreover, since the Front-CNN and Back-CNN modules consist of a shallow S3D and a lightweight 2D CNN, respectively, the model is suitable for real-time video recognition in edge-computing environments. We have implemented our CNN model on an NVIDIA Jetson Nano Developer Kit as an edge-computing device to demonstrate its real-time execution. | - |
| dc.format.extent | 10 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
| dc.title | Deep Edge Computing for Videos | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2021.3109904 | - |
| dc.identifier.scopusid | 2-s2.0-85114728980 | - |
| dc.identifier.wosid | 000696077500001 | - |
| dc.identifier.bibliographicCitation | IEEE ACCESS, v.9, pp 123348 - 123357 | - |
| dc.citation.title | IEEE ACCESS | - |
| dc.citation.volume | 9 | - |
| dc.citation.startPage | 123348 | - |
| dc.citation.endPage | 123357 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordPlus | FALL DETECTION | - |
| dc.subject.keywordPlus | SURVEILLANCE | - |
| dc.subject.keywordAuthor | Convolutional neural networks | - |
| dc.subject.keywordAuthor | Three-dimensional displays | - |
| dc.subject.keywordAuthor | Streaming media | - |
| dc.subject.keywordAuthor | Cameras | - |
| dc.subject.keywordAuthor | Optical imaging | - |
| dc.subject.keywordAuthor | Image edge detection | - |
| dc.subject.keywordAuthor | Optical computing | - |
| dc.subject.keywordAuthor | Edge computing | - |
| dc.subject.keywordAuthor | CNN | - |
| dc.subject.keywordAuthor | the IoT | - |
| dc.subject.keywordAuthor | anomaly detection | - |
| dc.subject.keywordAuthor | video recognition | - |
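The abstract's two-stage design can be sketched in a few lines of PyTorch: a shallow 3D CNN collapses the temporal axis of a frame sequence into a three-channel feature map (3CFM) whose shape matches an RGB image, so any pretrained 2D CNN can consume it. The layer counts, channel widths, and frame count below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ShallowS3D(nn.Module):
    """Hypothetical Front-CNN sketch: condenses T frames into a 3-channel map."""

    def __init__(self, in_channels=3, num_frames=16):
        super().__init__()
        self.conv = nn.Sequential(
            # Spatio-temporal feature extraction; padding preserves T, H, W.
            nn.Conv3d(in_channels, 8, kernel_size=(3, 3, 3), padding=(1, 1, 1)),
            nn.ReLU(inplace=True),
            # Kernel spans all T frames, collapsing the temporal axis to 1
            # and reducing channels to 3 (the RGB-compatible 3CFM).
            nn.Conv3d(8, 3, kernel_size=(num_frames, 3, 3), padding=(0, 1, 1)),
        )

    def forward(self, x):        # x: (N, 3, T, H, W)
        y = self.conv(x)         # (N, 3, 1, H, W)
        return y.squeeze(2)      # (N, 3, H, W) -- feed this to any 2D Back-CNN

front = ShallowS3D(num_frames=16)
clip = torch.randn(2, 3, 16, 112, 112)   # batch of two 16-frame RGB clips
feature_map = front(clip)
print(feature_map.shape)                  # torch.Size([2, 3, 112, 112])
```

Because the 3CFM has the same layout as an image tensor, the Back-CNN can be a stock pretrained model (e.g., a torchvision classifier), and the whole Front-CNN + Back-CNN chain remains end-to-end trainable.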
