
Complete Object-Compositional Neural Implicit Surfaces with 3D Pseudo Supervision

Full metadata record
dc.contributor.author: Kim, Wongyeom
dc.contributor.author: Park, Jisun
dc.contributor.author: Cho, Kyungeun
dc.date.accessioned: 2025-03-12T03:00:12Z
dc.date.available: 2025-03-12T03:00:12Z
dc.date.issued: 2025
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/57911
dc.description.abstract: Neural implicit surface reconstruction has recently emerged as a prominent paradigm in multi-view 3D reconstruction using deep learning. In contrast to traditional multi-view stereo methods, signed distance function (SDF)-based approaches leverage neural networks to effectively represent 3D scenes. Furthermore, to reconstruct scenes and individual objects separately, some studies have extended the framework to object-compositional neural implicit surface reconstruction by utilizing 2D instance masks to supervise the SDF of each object. Nonetheless, these methods often reconstruct objects as partial shapes in scenes captured from sparse viewpoints or in complex scenes containing multiple objects. This issue primarily stems from the absence of a 3D prior that could provide sufficient geometry for partially observed and occluded regions. We propose a framework for completing the partial object shapes of an object-compositional neural implicit representation utilizing a diffusion-based 3D mesh generation model. Existing diffusion models, trained only on large-scale 3D object datasets, generate complete shapes from partial shapes; however, their results differ significantly from the objects in the scene. To complete the representation of partial shapes while ensuring shape consistency across multi-view images, we combine the SDF values output by the diffusion model with the object-compositional neural implicit representation. The combined representation is then volume-rendered to fine-tune the diffusion model utilizing a 2D prior. Furthermore, the complete shapes generated by our method can serve as pseudo 3D priors to provide the geometry for the unobserved regions in the object-compositional representation. Extensive experiments demonstrate that our novel framework significantly improves the reconstruction quality of unobserved regions. © 2013 IEEE.
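The two core operations the abstract describes, blending a generated per-object SDF into the scene representation and turning SDF samples along a ray into volume-rendering weights, can be sketched as below. This is an illustrative approximation, not the paper's implementation: the minimum-based union blend and the NeuS-style logistic density are assumptions, and real systems would evaluate neural networks rather than arrays of precomputed SDF values.

```python
import math

def combine_sdfs(scene_sdf, generated_sdf):
    # Union-style blend (an assumed rule): the pointwise minimum lets the
    # completed shape from the generative model fill in regions where the
    # partial scene reconstruction reports empty space (large positive SDF).
    return [min(a, b) for a, b in zip(scene_sdf, generated_sdf)]

def sdf_to_weights(sdf_samples, s=64.0):
    # NeuS-style conversion of SDF samples along a ray into rendering weights:
    # a logistic sigmoid of the scaled SDF decreases across the surface, so
    # clamped front-to-back differences act as per-interval opacities, which
    # are then accumulated with transmittance as in standard volume rendering.
    phi = [1.0 / (1.0 + math.exp(-s * d)) for d in sdf_samples]
    alphas = [max(0.0, min(1.0, (phi[i] - phi[i + 1]) / (phi[i] + 1e-8)))
              for i in range(len(phi) - 1)]
    weights, trans = [], 1.0
    for a in alphas:
        weights.append(trans * a)
        trans *= 1.0 - a
    return weights

# Toy ray crossing a surface at depth 0.5: SDF positive outside, negative inside.
n = 64
sdf = [0.5 - i / n for i in range(n + 1)]
w = sdf_to_weights(sdf)
```

With this setup the weights sum to roughly 1 and concentrate around the sample where the SDF crosses zero, which is why rendering the combined SDF produces images that can supervise the diffusion model with a 2D prior.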
dc.format.extent: 11
dc.language: English
dc.language.iso: ENG
dc.publisher: IEEE
dc.title: Complete Object-Compositional Neural Implicit Surfaces with 3D Pseudo Supervision
dc.type: Article
dc.publisher.location: United States
dc.identifier.doi: 10.1109/ACCESS.2025.3544705
dc.identifier.scopusid: 2-s2.0-86000787048
dc.identifier.wosid: 001494114100001
dc.identifier.bibliographicCitation: IEEE Access, v.13, pp 36151 - 36161
dc.citation.title: IEEE Access
dc.citation.volume: 13
dc.citation.startPage: 36151
dc.citation.endPage: 36161
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Telecommunications
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Telecommunications
dc.subject.keywordAuthor: Deep learning
dc.subject.keywordAuthor: Mesh generation
dc.subject.keywordAuthor: Surface reconstruction
Files in This Item
There are no files associated with this item.
Appears in Collections: ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Cho, Kyung Eun
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)