Cited 17 times
End-to-End Sleep Staging Using Nocturnal Sounds from Microphone Chips for Mobile Devices
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Hong, Joonki | - |
| dc.contributor.author | Tran, Hai Hong | - |
| dc.contributor.author | Jung, Jinhwan | - |
| dc.contributor.author | Jang, Hyeryung | - |
| dc.contributor.author | Lee, Dongheon | - |
| dc.contributor.author | Yoon, In-Young | - |
| dc.contributor.author | Hong, Jung Kyung | - |
| dc.contributor.author | Kim, Jeong-Whun | - |
| dc.date.accessioned | 2023-04-27T14:40:19Z | - |
| dc.date.available | 2023-04-27T14:40:19Z | - |
| dc.date.issued | 2022-06 | - |
| dc.identifier.issn | 1179-1608 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/3884 | - |
| dc.description.abstract | Purpose: Nocturnal sounds contain abundant information and are easily obtainable in a non-contact manner. Sleep staging using nocturnal sounds recorded by common mobile devices may enable daily at-home sleep tracking. The objective of this study is to introduce an end-to-end (sound-to-sleep-stages) deep learning model for sound-based sleep staging, designed to work with audio from the microphone chips that are essential components of mobile devices such as modern smartphones. Patients and Methods: Two audio datasets were used: audio routinely recorded by a single microphone chip during polysomnography (PSG dataset, N=1154) and audio recorded by a smartphone (smartphone dataset, N=327). The audio was converted into Mel spectrograms to detect latent temporal-frequency patterns of breathing and body movement against ambient noise. The proposed neural network model learns to first extract features from each 30-second epoch and then analyze inter-epoch relationships among the extracted features to classify the epochs into sleep stages. Results: Our model achieved 70% epoch-by-epoch agreement for 4-class (wake, light, deep, REM) sleep stage classification and robust performance across various signal-to-noise conditions. Model performance was not considerably affected by sleep apnea or periodic limb movement. External validation on the smartphone dataset also showed 68% epoch-by-epoch agreement. Conclusion: The proposed end-to-end deep learning model shows the potential of low-quality sounds recorded by microphone chips to be used for sleep staging. A future study using nocturnal sounds recorded by mobile devices in home environments may further confirm the use of mobile device recordings as an at-home sleep tracker. | - |
| dc.format.extent | 15 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Dove Medical Press Ltd. | - |
| dc.title | End-to-End Sleep Staging Using Nocturnal Sounds from Microphone Chips for Mobile Devices | - |
| dc.type | Article | - |
| dc.publisher.location | New Zealand | - |
| dc.identifier.doi | 10.2147/NSS.S361270 | - |
| dc.identifier.scopusid | 2-s2.0-85133274850 | - |
| dc.identifier.wosid | 000836468400001 | - |
| dc.identifier.bibliographicCitation | Nature and Science of Sleep, v.14, pp 1187 - 1201 | - |
| dc.citation.title | Nature and Science of Sleep | - |
| dc.citation.volume | 14 | - |
| dc.citation.startPage | 1187 | - |
| dc.citation.endPage | 1201 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Neurosciences & Neurology | - |
| dc.relation.journalWebOfScienceCategory | Clinical Neurology | - |
| dc.relation.journalWebOfScienceCategory | Neurosciences | - |
| dc.subject.keywordAuthor | respiratory sounds | - |
| dc.subject.keywordAuthor | sleep stages | - |
| dc.subject.keywordAuthor | deep learning | - |
| dc.subject.keywordAuthor | smartphone | - |
| dc.subject.keywordAuthor | polysomnography | - |
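The abstract describes a preprocessing pipeline in which a night of audio is segmented into 30-second epochs and each epoch is converted into a Mel spectrogram before classification. A minimal NumPy-only sketch of that front end is shown below; the sample rate, FFT size, hop length, and number of Mel bands are illustrative assumptions, not parameters reported by the paper.

```python
import numpy as np

def hz_to_mel(f):
    # Standard Mel-scale conversion.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the Mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(epoch, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Framed power spectrum via a Hann-windowed short-time FFT,
    # then projection onto the Mel filterbank.
    win = np.hanning(n_fft)
    n_frames = 1 + (len(epoch) - n_fft) // hop
    frames = np.stack(
        [epoch[i * hop : i * hop + n_fft] * win for i in range(n_frames)]
    )
    power = np.abs(np.fft.rfft(frames, n=n_fft, axis=1)) ** 2
    return mel_filterbank(sr, n_fft, n_mels) @ power.T  # (n_mels, n_frames)

def epochs_of(audio, sr=16000, epoch_sec=30):
    # Segment a night of audio into non-overlapping 30-second epochs,
    # matching the scoring granularity of polysomnography.
    n = sr * epoch_sec
    return [audio[i : i + n] for i in range(0, len(audio) - n + 1, n)]
```

In a full pipeline of the kind the abstract outlines, each epoch's Mel spectrogram would feed a per-epoch feature extractor, and an inter-epoch model would then map the feature sequence to sleep stages.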
