Neural Rendering-Based 3D Scene Style Transfer Method via Semantic Understanding Using a Single Style Image
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Park, Jisun | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2024-08-08T07:00:36Z | - |
| dc.date.available | 2024-08-08T07:00:36Z | - |
| dc.date.issued | 2023-07 | - |
| dc.identifier.issn | 2227-7390 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/19169 | - |
| dc.description.abstract | In the rapidly emerging era of untact ("contact-free") technologies, demand for the three-dimensional (3D) virtual environments used in virtual reality (VR), augmented reality (AR), and the metaverse has grown significantly, owing to their extensive application across various domains. Current research focuses on automatically transferring the style of rendered images within a 3D virtual environment using artificial intelligence, aiming to minimize human intervention. However, prevalent studies on rendering-based 3D environment style transfer have inherent limitations. First, training a style transfer network dedicated to 3D virtual environments demands considerable style image data, and these data must align with viewpoints closely resembling those of the virtual environment. Second, 3D structures are rendered inconsistently: most studies neglect 3D scene geometry and rely solely on 2D input image features. Finally, style adaptation fails to accommodate the distinct characteristics of each object. To address these issues, we propose a novel neural rendering-based 3D scene style transfer technique. The method employs semantic nearest-neighbor feature matching, enabling style transfer within a 3D scene that respects the distinctive characteristics of each object, even when only a single style image is available. A neural radiance field allows the network to comprehend the geometric information of a 3D scene in relation to its viewpoint; the network then transfers style features drawn from the single style image via semantic nearest-neighbor feature matching (see the sketch following this table). Empirically, the proposed semantic 3D scene style transfer method was applied to both interior and exterior environments, using the Replica, 3D-FRONT, and Tanks and Temples datasets for testing. The results show that the proposed method surpasses existing style transfer techniques in 3D viewpoint consistency, style uniformity, and semantic coherence. | - |
| dc.format.extent | 18 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Neural Rendering-Based 3D Scene Style Transfer Method via Semantic Understanding Using a Single Style Image | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/math11143243 | - |
| dc.identifier.scopusid | 2-s2.0-85175115226 | - |
| dc.identifier.wosid | 001036745100001 | - |
| dc.identifier.bibliographicCitation | Mathematics, v.11, no.14, pp 1 - 18 | - |
| dc.citation.title | Mathematics | - |
| dc.citation.volume | 11 | - |
| dc.citation.number | 14 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 18 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Mathematics | - |
| dc.relation.journalWebOfScienceCategory | Mathematics | - |
| dc.subject.keywordAuthor | 3D style transfer | - |
| dc.subject.keywordAuthor | neural rendering | - |
| dc.subject.keywordAuthor | neural radiance fields | - |
| dc.subject.keywordAuthor | semantic feature matching | - |
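
The semantic nearest-neighbor feature matching step described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: feature extraction (e.g., from a pretrained CNN), the NeRF training loop, and the stylization losses are all omitted, and the function name `semantic_nn_feature_match` and its arguments are hypothetical. It shows only the core idea of pairing each content feature with its most similar style feature within the same semantic class.

```python
import numpy as np

def semantic_nn_feature_match(content_feats, style_feats,
                              content_labels, style_labels):
    """Replace each content feature with its nearest style feature
    (cosine similarity), restricted to the same semantic class.

    content_feats: (N, D) content-image feature vectors
    style_feats:   (M, D) style-image feature vectors
    content_labels / style_labels: (N,) / (M,) semantic class ids
    Returns an (N, D) array of matched style features.
    """
    eps = 1e-8
    # L2-normalize so a dot product equals cosine similarity
    c = content_feats / (np.linalg.norm(content_feats, axis=1, keepdims=True) + eps)
    s = style_feats / (np.linalg.norm(style_feats, axis=1, keepdims=True) + eps)

    matched = np.empty_like(content_feats)
    for cls in np.unique(content_labels):
        c_idx = np.where(content_labels == cls)[0]
        s_idx = np.where(style_labels == cls)[0]
        if s_idx.size == 0:
            # No style region carries this label: fall back to all style features
            s_idx = np.arange(style_feats.shape[0])
        sim = c[c_idx] @ s[s_idx].T  # (n_cls, m_cls) cosine similarities
        matched[c_idx] = style_feats[s_idx[np.argmax(sim, axis=1)]]
    return matched
```

Restricting matches to a shared semantic class is what lets a single style image stylize each object according to its own characteristics; without the label constraint, the procedure degenerates to plain nearest-neighbor style transfer over the whole image.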
