Affective social big data generation algorithm for autonomous controls by CRNN-based end-to-end controls
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kwak, Jeonghoon | - |
| dc.contributor.author | Park, Jong Hyuk | - |
| dc.contributor.author | Sung, Yunsick | - |
| dc.date.accessioned | 2023-04-28T02:40:46Z | - |
| dc.date.available | 2023-04-28T02:40:46Z | - |
| dc.date.issued | 2019-10 | - |
| dc.identifier.issn | 1380-7501 | - |
| dc.identifier.issn | 1573-7721 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/7582 | - |
| dc.description.abstract | Affective social multimedia computing provides opportunities to improve our daily lives. Devices in ubiquitous computing environments and autonomous vehicles operating around human beings can be controlled by analyzing and learning from affective social big data. Deep learning is a core algorithm for autonomous control; however, it requires huge amounts of training data, and collecting diverse training data is expensive. This collection limit can be overcome by analyzing affective social videos collected in advance, such as YouTube and Closed-Circuit Television (CCTV) videos, and then autonomously generating additional affective social videos as training data by controlling other cameras without human involvement. The camera control signals can be generated by Convolutional Neural Network (CNN)-based end-to-end control. However, consecutively captured images must be analyzed together to improve the quality of the generated control signals. This paper proposes a system that generates affective social videos for deep learning via Convolutional Recurrent Neural Network (CRNN)-based end-to-end control. Images extracted from affective social videos are used to compute control signals with the CRNN, and additional affective social videos are then generated from the extracted consecutive images and the camera control signals. The effectiveness of the proposed method was verified experimentally by comparing its results with those of a traditional CNN: the accuracy of the control signals obtained with the proposed method was 56.30% higher than that obtained with the traditional CNN. | - |
| dc.format.extent | 18 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | SPRINGER | - |
| dc.title | Affective social big data generation algorithm for autonomous controls by CRNN-based end-to-end controls | - |
| dc.type | Article | - |
| dc.publisher.location | Netherlands | - |
| dc.identifier.doi | 10.1007/s11042-019-7703-4 | - |
| dc.identifier.scopusid | 2-s2.0-85065409036 | - |
| dc.identifier.wosid | 000485298000018 | - |
| dc.identifier.bibliographicCitation | MULTIMEDIA TOOLS AND APPLICATIONS, v.78, no.19, pp 27175 - 27192 | - |
| dc.citation.title | MULTIMEDIA TOOLS AND APPLICATIONS | - |
| dc.citation.volume | 78 | - |
| dc.citation.number | 19 | - |
| dc.citation.startPage | 27175 | - |
| dc.citation.endPage | 27192 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.subject.keywordAuthor | Affective social big data | - |
| dc.subject.keywordAuthor | Multimedia data | - |
| dc.subject.keywordAuthor | Deep learning | - |
| dc.subject.keywordAuthor | Convolutional recurrent neural network | - |
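
The abstract describes CRNN-based end-to-end control: a convolutional feature extractor is applied to each frame of a video, and a recurrent layer aggregates the consecutive frames before camera control signals are emitted. The sketch below is a minimal, hypothetical illustration of that idea in NumPy; the layer sizes, filter counts, and two-dimensional control output (e.g. pan/tilt) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(frame, kernel):
    """Valid 2-D convolution, ReLU, then global average pooling (one scalar per kernel)."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0.0).mean()

def crnn_controls(frames, kernels, W_xh, W_hh, W_hy):
    """Apply the shared CNN to each frame, run an Elman-style RNN over the
    sequence, and map the final hidden state to control signals."""
    h = np.zeros(W_hh.shape[0])
    for frame in frames:
        x = np.array([conv_features(frame, k) for k in kernels])  # per-frame CNN features
        h = np.tanh(W_xh @ x + W_hh @ h)                          # recurrent update
    return W_hy @ h                                               # control signal vector

# Toy sequence of 5 grayscale 16x16 frames with random (untrained) weights.
frames = rng.normal(size=(5, 16, 16))
kernels = rng.normal(size=(4, 3, 3))      # 4 convolutional filters
W_xh = rng.normal(size=(8, 4)) * 0.5      # features -> hidden
W_hh = rng.normal(size=(8, 8)) * 0.5      # hidden -> hidden
W_hy = rng.normal(size=(2, 8)) * 0.5      # hidden -> 2 controls (e.g. pan, tilt)

controls = crnn_controls(frames, kernels, W_xh, W_hh, W_hy)
print(controls.shape)
```

In a trained system the weights would be learned from the affective social videos, and the key contrast with a per-frame CNN is the recurrent state `h`, which lets the control signal depend on the whole sequence of consecutive images rather than a single frame.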
