Which LSTM Type is Better for Interaction Force Estimation?
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Cho, Hyeon | - |
| dc.contributor.author | Kim, Hyungho | - |
| dc.contributor.author | Ko, Dae-Kwan | - |
| dc.contributor.author | Lim, Soo-Chul | - |
| dc.contributor.author | Hwang, Wonjun | - |
| dc.date.accessioned | 2023-04-28T05:42:34Z | - |
| dc.date.available | 2023-04-28T05:42:34Z | - |
| dc.date.issued | 2019-11 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/8636 | - |
| dc.description.abstract | Touch, one of the five main human senses, is the first sense to develop as a human being forms. Tactile information includes pressure, temperature, and the texture of objects, and it helps a person interact with the surrounding environment. Among tactile modalities, pressure is used in various fields such as medicine, beauty, and mobile devices. However, humans perceive the real world through multi-modal senses such as sound and vision. In this paper, we study interaction force estimation using a haptic sensor and video. Interaction force estimation through video analysis is a cross-modal approach applicable to, for example, a software haptic feedback method that provides haptic feedback for the remote control of a robot arm by predicting the interaction force even in the absence of a haptic sensor. We compare and analyze three types of deep neural networks for predicting the interaction force. In particular, the best model for the stacked CNN-LSTM structure is selected through a detailed analysis of how changes to the LSTM structure affect the video regression problem. The average errors of the best model are MSE 0.1306, RMSE 0.2740, and MAE 0.1878. | - |
| dc.format.extent | 6 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | Which LSTM Type is Better for Interaction Force Estimation? | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/RITAPP.2019.8932854 | - |
| dc.identifier.scopusid | 2-s2.0-85077983408 | - |
| dc.identifier.wosid | 000526059800011 | - |
| dc.identifier.bibliographicCitation | 2019 7TH INTERNATIONAL CONFERENCE ON ROBOT INTELLIGENCE TECHNOLOGY AND APPLICATIONS (RITA), pp 61 - 66 | - |
| dc.citation.title | 2019 7TH INTERNATIONAL CONFERENCE ON ROBOT INTELLIGENCE TECHNOLOGY AND APPLICATIONS (RITA) | - |
| dc.citation.startPage | 61 | - |
| dc.citation.endPage | 66 | - |
| dc.type.docType | Proceedings Paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Robotics | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Robotics | - |
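The abstract reports the best model's performance as MSE, RMSE, and MAE. As a reference, here is a minimal sketch of how these regression error metrics are computed; the force values below are illustrative only, not data from the paper:

```python
import math

def regression_errors(y_true, y_pred):
    """Compute MSE, RMSE, and MAE between target and predicted values."""
    n = len(y_true)
    sq_err = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    abs_err = [abs(t - p) for t, p in zip(y_true, y_pred)]
    mse = sum(sq_err) / n
    return {"MSE": mse, "RMSE": math.sqrt(mse), "MAE": sum(abs_err) / n}

# Illustrative interaction force values (hypothetical, not from the paper).
true_force = [0.0, 1.2, 2.5, 3.1]
pred_force = [0.1, 1.0, 2.7, 3.0]
print(regression_errors(true_force, pred_force))
```

RMSE is simply the square root of MSE, which is why it shares MSE's sensitivity to large errors while staying in the same units as the target (here, force).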