Switchable-Encoder-Based Self-Supervised Learning Framework for Monocular Depth and Pose Estimation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kim, Junoh | - |
| dc.contributor.author | Gao, Rui | - |
| dc.contributor.author | Park, Jisun | - |
| dc.contributor.author | Yoon, Jinsoo | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2024-08-08T13:32:33Z | - |
| dc.date.available | 2024-08-08T13:32:33Z | - |
| dc.date.issued | 2023-12 | - |
| dc.identifier.issn | 2072-4292 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/22718 | - |
| dc.description.abstract | Monocular depth prediction research is essential for expanding meaning from 2D to 3D. Recent studies have focused on the application of newly proposed encoders; however, their development within the self-supervised learning framework remains unexplored, an aspect critical for advancing foundational models of 3D semantic interpretation. Addressing the dynamic nature of encoder-based research, especially in performance evaluations of feature extraction and pre-trained models, this research proposes the switchable encoder learning framework (SELF). SELF enhances versatility by enabling the seamless integration of diverse encoders in a self-supervised learning context for depth prediction. This integration is realized through the direct transfer of feature information from the encoder and by standardizing the input structure of the decoder to accommodate various encoder architectures. Furthermore, the framework is extended and incorporated into an adaptable decoder for depth prediction and camera pose learning, employing standard loss functions. Comparative experiments with previous frameworks using the same encoder reveal that SELF achieves a 7% reduction in parameters while enhancing performance. Remarkably, substituting newly proposed algorithms for the encoder both improves the outcomes and significantly decreases the number of parameters, by 23%. The experimental findings highlight the ability of SELF to broaden depth factors, such as depth consistency. This framework facilitates the objective selection of algorithms as a backbone for extended research in monocular depth prediction. © 2023 by the authors. | - |
| dc.format.extent | 25 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Switchable-Encoder-Based Self-Supervised Learning Framework for Monocular Depth and Pose Estimation | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/rs15245739 | - |
| dc.identifier.scopusid | 2-s2.0-85180617293 | - |
| dc.identifier.wosid | 001130708000001 | - |
| dc.identifier.bibliographicCitation | Remote Sensing, v.15, no.24, pp 1 - 25 | - |
| dc.citation.title | Remote Sensing | - |
| dc.citation.volume | 15 | - |
| dc.citation.number | 24 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 25 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Environmental Sciences & Ecology | - |
| dc.relation.journalResearchArea | Geology | - |
| dc.relation.journalResearchArea | Remote Sensing | - |
| dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
| dc.relation.journalWebOfScienceCategory | Environmental Sciences | - |
| dc.relation.journalWebOfScienceCategory | Geosciences, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Remote Sensing | - |
| dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
| dc.subject.keywordAuthor | monocular depth estimation | - |
| dc.subject.keywordAuthor | self-supervised learning | - |
| dc.subject.keywordAuthor | structure from motion | - |