Textured Mesh Generation Using Multi-View and Multi-Source Supervision and Generative Adversarial Networks
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Wen, Mingyun | - |
| dc.contributor.author | Park, Jisun | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2023-04-27T15:40:31Z | - |
| dc.date.available | 2023-04-27T15:40:31Z | - |
| dc.date.issued | 2021-11 | - |
| dc.identifier.issn | 2072-4292 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/4254 | - |
| dc.description.abstract | This study focuses on reconstructing accurate meshes with high-resolution textures from single images. The reconstruction process involves two networks: a mesh-reconstruction network and a texture-reconstruction network. The mesh-reconstruction network estimates a deformation map, which deforms a template mesh to the shape of the target object in the input image, along with a low-resolution texture. We propose reconstructing a mesh with a high-resolution texture by enhancing the low-resolution texture with a super-resolution method. The texture-reconstruction network is structured as a generative adversarial network comprising a generator and a discriminator. During training of the texture-reconstruction network, the discriminator must focus on learning high-quality texture predictions and ignore the difference between the generated mesh and the actual mesh. To achieve this, we use meshes reconstructed by the mesh-reconstruction network and textures generated through inverse rendering to produce pseudo-ground-truth images. Experiments on the 3D-Future dataset show that the proposed approach generates improved three-dimensional (3D) textured meshes compared with existing methods, both quantitatively and qualitatively. Additionally, the proposed approach significantly improves the texture of the output image. | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Textured Mesh Generation Using Multi-View and Multi-Source Supervision and Generative Adversarial Networks | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/rs13214254 | - |
| dc.identifier.scopusid | 2-s2.0-85118145666 | - |
| dc.identifier.wosid | 000721147100001 | - |
| dc.identifier.bibliographicCitation | REMOTE SENSING, v.13, no.21 | - |
| dc.citation.title | REMOTE SENSING | - |
| dc.citation.volume | 13 | - |
| dc.citation.number | 21 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Environmental Sciences & Ecology | - |
| dc.relation.journalResearchArea | Geology | - |
| dc.relation.journalResearchArea | Remote Sensing | - |
| dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
| dc.relation.journalWebOfScienceCategory | Environmental Sciences | - |
| dc.relation.journalWebOfScienceCategory | Geosciences, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Remote Sensing | - |
| dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
| dc.subject.keywordAuthor | single image textured mesh reconstruction | - |
| dc.subject.keywordAuthor | convolutional neural networks | - |
| dc.subject.keywordAuthor | generative adversarial network | - |
| dc.subject.keywordAuthor | super-resolution | - |
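The abstract describes training the texture-reconstruction network adversarially: the discriminator scores generated textures against pseudo-ground-truth renderings, and the generator is trained to fool it. The standard generator/discriminator losses behind such a setup can be sketched as follows; this is a generic GAN binary cross-entropy objective in NumPy, with function names and example scores that are illustrative assumptions, not code from the paper:

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy over sigmoid scores, averaged over the batch."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def gan_losses(d_real, d_fake):
    """Compute the two adversarial losses.

    d_real: discriminator scores for pseudo-ground-truth images.
    d_fake: discriminator scores for generator (super-resolved texture) outputs.
    Returns (discriminator_loss, generator_loss).
    """
    # Discriminator: push real scores toward 1 and fake scores toward 0.
    d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
    # Generator: push the discriminator's scores on fakes toward 1.
    g_loss = bce(d_fake, np.ones_like(d_fake))
    return d_loss, g_loss

# Toy scores: the discriminator currently separates real from fake fairly well,
# so its loss is small and the generator's loss is large.
d_real = np.array([0.9, 0.8])
d_fake = np.array([0.2, 0.1])
d_loss, g_loss = gan_losses(d_real, d_fake)
```

In the paper's setting, using pseudo-ground-truth images rendered from the reconstructed mesh (rather than the ground-truth mesh) keeps the discriminator's real/fake comparison focused on texture quality instead of mesh-shape discrepancies.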
