Detailed Information

Cited 2 times in Web of Science; cited 2 times in Scopus

Affective social big data generation algorithm for autonomous controls by CRNN-based end-to-end controls

Authors
Kwak, Jeonghoon; Park, Jong Hyuk; Sung, Yunsick
Issue Date
Oct-2019
Publisher
SPRINGER
Keywords
Affective social big data; Multimedia data; Deep learning; Convolutional recurrent neural network
Citation
MULTIMEDIA TOOLS AND APPLICATIONS, v.78, no.19, pp. 27175-27192
Pages
18
Indexed
SCIE
SCOPUS
Journal Title
MULTIMEDIA TOOLS AND APPLICATIONS
Volume
78
Number
19
Start Page
27175
End Page
27192
URI
https://scholarworks.dongguk.edu/handle/sw.dongguk/7582
DOI
10.1007/s11042-019-7703-4
ISSN
1380-7501 (print)
1573-7721 (online)
Abstract
Affective social multimedia computing provides opportunities to improve our daily lives. Many systems, such as devices in ubiquitous computing environments and autonomous vehicles operating in real environments around human beings, can be controlled by analyzing and learning from affective social big data. Deep learning is a core learning algorithm for autonomous control; however, it requires huge amounts of learning data, and collecting diverse learning data is expensive. The limited supply of affective social videos for deep learning can be addressed by analyzing affective social videos collected in advance, such as YouTube and Closed-Circuit Television (CCTV) videos, and autonomously generating additional affective social videos as learning data by controlling other cameras without human involvement. The control signals of the cameras are generated by Convolutional Neural Network (CNN)-based end-to-end control. However, consecutively captured images need to be analyzed to improve the quality of the generated control signals. This paper proposes a system that generates affective social videos for deep learning through Convolutional Recurrent Neural Network (CRNN)-based end-to-end control. Images extracted from affective social videos are used to calculate control signals with the CRNN, and additional affective social videos are then generated from the extracted consecutive images and the camera control signals. The effectiveness of the proposed method was verified experimentally by comparing its results with those obtained using a traditional CNN. The results showed that the accuracy of the control signals obtained using the proposed method was 56.30% higher than that obtained using the traditional CNN.
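The abstract describes a CRNN pipeline in which per-frame CNN features from consecutive images are fed through a recurrent layer whose output becomes camera control signals. The paper itself specifies the full architecture; the following is only a minimal NumPy sketch of that general idea, with all weights random and all dimensions (frame size, kernel count, hidden size, three control outputs) chosen hypothetically for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    # Valid convolution of one single-channel image with K kernels, then ReLU.
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)

def cnn_features(frame, kernels):
    # Global average pooling over each feature map -> one K-dim vector per frame.
    return conv2d(frame, kernels).mean(axis=(1, 2))

def crnn_control(frames, kernels, Wxh, Whh, Why):
    # Feed per-frame CNN features through a vanilla RNN; the final hidden
    # state is mapped linearly to camera control signals (e.g. pan/tilt/zoom).
    h = np.zeros(Whh.shape[0])
    for frame in frames:
        x = cnn_features(frame, kernels)
        h = np.tanh(Wxh @ x + Whh @ h)
    return Why @ h

# Toy example: 5 consecutive 16x16 frames, 4 conv kernels, hidden size 8,
# 3 control outputs. A real system would learn these weights end to end.
frames = [rng.standard_normal((16, 16)) for _ in range(5)]
kernels = rng.standard_normal((4, 3, 3))
Wxh = rng.standard_normal((8, 4))
Whh = rng.standard_normal((8, 8))
Why = rng.standard_normal((3, 8))

controls = crnn_control(frames, kernels, Wxh, Whh, Why)
print(controls.shape)  # (3,)
```

The recurrence is what distinguishes this from the CNN-only baseline the paper compares against: each frame's features update a hidden state, so the control signal depends on the whole consecutive sequence rather than a single image.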
Appears in Collections
College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles

Related Researcher

Sung, Yunsick
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)