Cited 22 times
An Efficient Three-Dimensional Convolutional Neural Network for Inferring Physical Interaction Force from Video
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Dongyi | - |
| dc.contributor.author | Cho, Hyeon | - |
| dc.contributor.author | Shin, Hochul | - |
| dc.contributor.author | Lim, Soo-Chul | - |
| dc.contributor.author | Hwang, Wonjun | - |
| dc.date.accessioned | 2024-08-08T03:30:37Z | - |
| dc.date.available | 2024-08-08T03:30:37Z | - |
| dc.date.issued | 2019-08-20 | - |
| dc.identifier.issn | 1424-8220 | - |
| dc.identifier.issn | 1424-3210 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/16918 | - |
| dc.description.abstract | Interaction forces are traditionally measured by a contact-type haptic sensor. In this paper, we propose a novel and practical method for inferring the interaction forces between two objects based only on video data from a non-contact camera sensor, without the use of conventional haptic sensors. Specifically, we predict the interaction force by observing how the texture of the target object changes under an external force. Our hypothesis is that a three-dimensional (3D) convolutional neural network (CNN) can be trained to predict physical interaction forces from video images. We propose a bottleneck-based 3D depthwise separable CNN architecture in which the video is disentangled into spatial and temporal information. By applying the basic depthwise convolution concept to each video frame, spatial information can be learned efficiently; for temporal information, a 3D pointwise convolution learns the linear combination among sequential frames. To train and validate the proposed model, we collected a large dataset of video clips of physical interactions between two objects under different conditions (illumination and angle variations), paired with the corresponding interaction forces measured by a haptic sensor as the ground truth. Our experimental results confirm the hypothesis: compared with previous models, the proposed model is more accurate and efficient; although its model size is 10 times smaller, it achieves better accuracy. The experiments demonstrate that the proposed model remains robust under different conditions and can successfully estimate the interaction force between objects. | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | An Efficient Three-Dimensional Convolutional Neural Network for Inferring Physical Interaction Force from Video | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/s19163579 | - |
| dc.identifier.scopusid | 2-s2.0-85071522902 | - |
| dc.identifier.wosid | 000484407200136 | - |
| dc.identifier.bibliographicCitation | SENSORS, v.19, no.16 | - |
| dc.citation.title | SENSORS | - |
| dc.citation.volume | 19 | - |
| dc.citation.number | 16 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Chemistry | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Instruments & Instrumentation | - |
| dc.relation.journalWebOfScienceCategory | Chemistry, Analytical | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
| dc.subject.keywordAuthor | deep learning | - |
| dc.subject.keywordAuthor | force estimation | - |
| dc.subject.keywordAuthor | interaction force | - |
| dc.subject.keywordAuthor | convolutional neural network | - |
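The abstract describes factoring a 3D convolution into a frame-wise depthwise spatial convolution plus a pointwise convolution that mixes channels across locations. The sketch below illustrates that decomposition in plain numpy; the function names, shapes, and kernel sizes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def depthwise_spatial(x, k):
    """Depthwise step: a per-channel 2D convolution applied independently
    to each video frame. x: (C, T, H, W), k: (C, 3, 3); 'valid' padding
    is used here purely for brevity."""
    C, T, H, W = x.shape
    out = np.zeros((C, T, H - 2, W - 2))
    for c in range(C):            # each channel has its own spatial kernel
        for t in range(T):        # applied to every frame separately
            for i in range(H - 2):
                for j in range(W - 2):
                    out[c, t, i, j] = np.sum(x[c, t, i:i+3, j:j+3] * k[c])
    return out

def pointwise(x, w):
    """Pointwise step: a 1x1x1 convolution that forms linear combinations
    of the channels at every (frame, pixel) location. w: (C_out, C_in)."""
    return np.einsum('oc,cthw->othw', w, x)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 6, 8, 8))                 # (channels, frames, H, W)
y = depthwise_spatial(x, rng.standard_normal((4, 3, 3)))
z = pointwise(y, rng.standard_normal((8, 4)))
print(z.shape)  # (8, 6, 6, 6)
```

The efficiency claim in the abstract follows from this factorization: the depthwise and pointwise steps together use far fewer parameters than a single dense 3D convolution over all channels, frames, and spatial positions.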
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
