A Neuro-Symbolic Approach to Fall Detection via Monocular Depth Estimation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Xu, Yinghai | - |
| dc.contributor.author | Kim, Bongjun | - |
| dc.contributor.author | Wang, In-Nea | - |
| dc.contributor.author | Jeong, Junho | - |
| dc.date.accessioned | 2026-03-09T07:30:22Z | - |
| dc.date.available | 2026-03-09T07:30:22Z | - |
| dc.date.issued | 2026-02 | - |
| dc.identifier.issn | 2076-3417 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/63924 | - |
| dc.description.abstract | Falls remain a critical safety concern in surveillance settings, yet monocular RGB methods often degrade in multi-person scenes with occlusion and loss of three-dimensional cues. This study proposes a neuro-symbolic framework that restores physically interpretable depth proxies from monocular video and fuses them with skeleton-based spatio-temporal inference for robust fall detection. The pipeline estimates per-frame depth and 2D skeletons, recovers world coordinates for key joints, and derives absolute neck height and vertical descent rate for rule-based adjudication, while a neural stream operates on joint trajectories; final decisions combine both streams with a logical policy and short-horizon temporal consistency. Experiments in a realistic indoor testbed with multi-person activity compare three configurations: neural, symbolic, and fused. The fused neuro-symbolic method achieved an accuracy of 0.88 and an F1 score of 0.76 on the real surveillance test set, outperforming the neural method alone (accuracy 0.81, F1 0.64) and the symbolic method alone (accuracy 0.77, F1 0.35). Gains arise from complementary error profiles: depth-derived, rule-based cues suppress spurious positives on non-fall frames, while the neural stream recovers true falls near rule boundaries. These findings indicate that integrating monocular depth proxies with interpretable rules improves reliability without additional sensors, supporting deployment in complex, multi-person surveillance environments. | - |
| dc.format.extent | 19 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | A Neuro-Symbolic Approach to Fall Detection via Monocular Depth Estimation | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/app16041895 | - |
| dc.identifier.scopusid | 2-s2.0-105031445035 | - |
| dc.identifier.wosid | 001699825100001 | - |
| dc.identifier.bibliographicCitation | Applied Sciences, v.16, no.4, pp 1 - 19 | - |
| dc.citation.title | Applied Sciences | - |
| dc.citation.volume | 16 | - |
| dc.citation.number | 4 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 19 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Chemistry | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Materials Science | - |
| dc.relation.journalResearchArea | Physics | - |
| dc.relation.journalWebOfScienceCategory | Chemistry, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Materials Science, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
| dc.subject.keywordAuthor | fall detection | - |
| dc.subject.keywordAuthor | neuro-symbolic learning | - |
| dc.subject.keywordAuthor | monocular depth estimation | - |
| dc.subject.keywordAuthor | skeleton-based action recognition | - |
| dc.subject.keywordAuthor | spatio-temporal graph convolutional networks | - |
| dc.subject.keywordAuthor | video surveillance | - |
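
The abstract describes a pipeline that back-projects 2D joints to 3D using estimated depth, applies rules on absolute neck height and vertical descent rate, and fuses the rule verdict with a neural score under a short-horizon temporal-consistency policy. The sketch below illustrates that flow under stated assumptions: the pinhole intrinsics, thresholds, window length, fusion policy, and all function names are hypothetical, not values or APIs from the paper, and converting camera-frame coordinates to absolute height above the floor would additionally require camera extrinsics that the sketch omits.

```python
import numpy as np

# Illustrative sketch only: intrinsics, thresholds, window length, and the
# fusion policy below are assumptions, not values from the paper.

FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0   # assumed pinhole intrinsics
NECK_HEIGHT_MIN_M = 0.45    # assumed: neck this low suggests a fallen pose
DESCENT_RATE_MIN_MPS = 1.0  # assumed: rapid vertical drop of the neck
WINDOW = 5                  # assumed short-horizon consistency window (frames)

def backproject(u, v, z):
    """Recover camera-frame 3D coordinates of a joint from its 2D pixel
    position (u, v) and estimated depth z via the pinhole model."""
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=-1)

def symbolic_fall(neck_height_m, fps):
    """Rule-based adjudication per frame: flag frames where the neck is
    both low and descending quickly (the conjunction is an assumed rule)."""
    h = np.asarray(neck_height_m, dtype=float)
    descent = -np.gradient(h) * fps              # m/s, positive when dropping
    return (h < NECK_HEIGHT_MIN_M) & (descent > DESCENT_RATE_MIN_MPS)

def fuse(rule_flags, neural_scores, threshold=0.5):
    """Combine the two streams with a logical policy plus temporal
    consistency: alarm only when both streams agree on a majority of
    frames within the window (an assumed policy)."""
    agree = rule_flags & (np.asarray(neural_scores) > threshold)
    votes = np.convolve(agree.astype(float), np.ones(WINDOW), mode="same")
    return votes > WINDOW / 2
```

As described in the abstract, a policy of this shape lets the rule stream veto spurious neural positives on non-fall frames, while the neural stream can still confirm true falls whose measurements sit near the rule thresholds.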
