Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Depth Prior-Guided 3D Voxel Feature Fusion for 3D Semantic Estimation from Monocular Videos

Full metadata record
DC Field: Value
dc.contributor.author: Wen, Mingyun
dc.contributor.author: Cho, Kyungeun
dc.date.accessioned: 2024-08-08T13:32:27Z
dc.date.available: 2024-08-08T13:32:27Z
dc.date.issued: 2024-07
dc.identifier.issn: 2227-7390
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/22702
dc.description.abstract: Existing 3D semantic scene reconstruction methods utilize the same set of features extracted from deep learning networks for both 3D semantic estimation and geometry reconstruction, ignoring the differing requirements of the semantic segmentation and geometry construction tasks. Additionally, current methods allocate 2D image features to all voxels along camera rays during the back-projection process, without accounting for empty or occluded voxels. To address these issues, we propose separating the features for 3D semantic estimation from those for 3D mesh reconstruction. We use a pretrained vision transformer network for image feature extraction, and depth priors estimated by a pretrained multi-view stereo network guide the allocation of image features to 3D voxels during the back-projection process. The back-projected image features are aggregated within each 3D voxel via averaging, creating coherent voxel features. The resulting 3D feature volume, composed of unified voxel feature vectors, is fed into a 3D CNN with a semantic classification head to produce a 3D semantic volume. This volume can be combined with existing 3D mesh reconstruction networks to produce a 3D semantic mesh. Experimental results on real-world datasets demonstrate that the proposed method significantly increases 3D semantic estimation accuracy.
dc.format.extent: 11
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI
dc.title: Depth Prior-Guided 3D Voxel Feature Fusion for 3D Semantic Estimation from Monocular Videos
dc.type: Article
dc.publisher.location: Switzerland
dc.identifier.doi: 10.3390/math12132114
dc.identifier.scopusid: 2-s2.0-85198439562
dc.identifier.wosid: 001269659300001
dc.identifier.bibliographicCitation: Mathematics, v.12, no.13, pp. 1-11
dc.citation.title: Mathematics
dc.citation.volume: 12
dc.citation.number: 13
dc.citation.startPage: 1
dc.citation.endPage: 11
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Mathematics
dc.relation.journalWebOfScienceCategory: Mathematics
dc.subject.keywordPlus: RECONSTRUCTION
dc.subject.keywordPlus: TRACKING
dc.subject.keywordAuthor: 3D semantic scene reconstruction
dc.subject.keywordAuthor: depth priors
dc.subject.keywordAuthor: vision transformer
dc.subject.keywordAuthor: multi-view stereo network
dc.subject.keywordAuthor: voxel feature fusion
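The depth prior-guided back-projection described in the abstract can be sketched as follows. This is an illustrative reading of the idea, not the authors' implementation: all function names, tensor shapes, and the tolerance parameter `tau` are assumptions introduced here. Each voxel center is projected into every view; a voxel receives a view's 2D feature only if it lies within a tolerance band around that view's depth prior, and the gathered features are averaged per voxel.

```python
import numpy as np

def backproject_features(feat_maps, depths, K, poses, voxel_centers, tau=0.1):
    """Depth prior-guided back-projection (illustrative sketch, not the paper's code).

    feat_maps:     (V, H, W, C) per-view 2D image features
    depths:        (V, H, W) depth priors from a multi-view stereo network
    K:             (3, 3) shared camera intrinsics
    poses:         (V, 4, 4) world-to-camera extrinsics
    voxel_centers: (N, 3) voxel centers in world coordinates
    tau:           tolerance band around the depth prior (assumed hyperparameter)
    Returns (N, C) averaged voxel features; voxels hit by no view stay zero.
    """
    V, H, W, C = feat_maps.shape
    N = voxel_centers.shape[0]
    acc = np.zeros((N, C))
    cnt = np.zeros((N, 1))
    homog = np.concatenate([voxel_centers, np.ones((N, 1))], axis=1)  # (N, 4)
    for v in range(V):
        cam = (poses[v] @ homog.T).T[:, :3]      # voxel centers in camera frame
        z = cam[:, 2]                            # depth of each voxel in this view
        pix = (K @ cam.T).T                      # perspective projection
        xi = np.round(pix[:, 0] / z).astype(int)  # pixel column
        yi = np.round(pix[:, 1] / z).astype(int)  # pixel row
        valid = (z > 0) & (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H)
        # Depth-prior gate: only voxels near the predicted surface get features,
        # which skips empty space and occluded voxels along the ray.
        d = np.zeros(N)
        d[valid] = depths[v, yi[valid], xi[valid]]
        near = valid & (np.abs(z - d) < tau)
        acc[near] += feat_maps[v, yi[near], xi[near]]
        cnt[near] += 1
    return acc / np.maximum(cnt, 1)              # average across contributing views
```

In this reading, the gate `|z - d| < tau` is what distinguishes the method from plain back-projection, which would smear each 2D feature over every voxel along the ray; the averaged (N, C) volume would then feed a 3D CNN with a semantic classification head.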
Files in This Item
There are no files associated with this item.
Appears in Collections:
College of Advanced Convergence Engineering > Department of Computer Science and Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Cho, Kyung Eun
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
