Cited 6 times
Digestive Organ Recognition in Video Capsule Endoscopy Based on Temporal Segmentation Network
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Shin, Yejee | - |
| dc.contributor.author | Eo, Taejoon | - |
| dc.contributor.author | Rha, Hyeongseop | - |
| dc.contributor.author | Oh, Dong Jun | - |
| dc.contributor.author | Son, Geonhui | - |
| dc.contributor.author | An, Jiwoong | - |
| dc.contributor.author | Kim, You Jin | - |
| dc.contributor.author | Hwang, Dosik | - |
| dc.contributor.author | Lim, Yun Jeong | - |
| dc.date.accessioned | 2023-04-27T14:40:20Z | - |
| dc.date.available | 2023-04-27T14:40:20Z | - |
| dc.date.issued | 2022-09 | - |
| dc.identifier.issn | 0302-9743 | - |
| dc.identifier.issn | 1611-3349 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/3886 | - |
| dc.description.abstract | The interpretation of video capsule endoscopy (VCE) usually takes more than an hour, which can be a tedious process for clinicians. To shorten the reading time of VCE, algorithms that automatically detect lesions in the small bowel are being actively developed; however, clinicians still need to manually mark anatomic transition points in VCE. Therefore, for fully automated reading, anatomical temporal segmentation must first be performed automatically on the full-length VCE. This study aims to develop an automated organ recognition method for VCE based on a temporal segmentation network. To temporally locate and classify organs, including the stomach, small bowel, and colon, in long untrimmed videos, we use the MS-TCN++ model, which contains temporal convolution layers. To improve temporal segmentation performance, a hybrid of two state-of-the-art feature extraction models (i.e., TimeSformer and I3D) is used. Extensive experiments showed the effectiveness of the proposed method in capturing long-range dependencies and recognizing temporal segments of organs. For training and validation of the proposed model, a dataset of 200 patients (100 normal and 100 abnormal VCE) was used. On the test set of 40 patients (20 normal and 20 abnormal VCE), the proposed method achieved an accuracy of 96.15, F1-score@{50, 75, 90} of {96.17, 93.61, 86.80}, and a segmental edit distance of 95.83 in the three-class classification of organs (stomach, small bowel, and colon) in full-length VCE. | - |
| dc.format.extent | 11 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Springer Cham | - |
| dc.title | Digestive Organ Recognition in Video Capsule Endoscopy Based on Temporal Segmentation Network | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.1007/978-3-031-16449-1_14 | - |
| dc.identifier.scopusid | 2-s2.0-85139073357 | - |
| dc.identifier.wosid | 000867568000014 | - |
| dc.identifier.bibliographicCitation | Medical Image Computing and Computer Assisted Intervention – MICCAI 2022, v.13437, pp 136 - 146 | - |
| dc.citation.title | Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 | - |
| dc.citation.volume | 13437 | - |
| dc.citation.startPage | 136 | - |
| dc.citation.endPage | 146 | - |
| dc.type.docType | Proceedings Paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
| dc.relation.journalResearchArea | Radiology, Nuclear Medicine & Medical Imaging | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
| dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
| dc.relation.journalWebOfScienceCategory | Radiology, Nuclear Medicine & Medical Imaging | - |
| dc.subject.keywordAuthor | Video Capsule Endoscopy | - |
| dc.subject.keywordAuthor | Organ recognition | - |
| dc.subject.keywordAuthor | Temporal segmentation | - |
| dc.subject.keywordAuthor | Temporal convolutional networks | - |
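The abstract reports F1-score@{50, 75, 90} and a segmental edit distance, which are the standard evaluation metrics for temporal action/organ segmentation. The following is a minimal pure-Python sketch of how these metrics are conventionally computed from per-frame label sequences — an illustrative assumption-laden reimplementation, not the authors' evaluation code.

```python
def segments(labels):
    """Collapse a per-frame label sequence into (label, start, end) runs."""
    segs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segs.append((labels[start], start, i))
            start = i
    return segs


def edit_score(pred, gt):
    """Segmental edit score: 100 * (1 - normalized Levenshtein distance
    between the segment-label sequences of prediction and ground truth)."""
    p = [s[0] for s in segments(pred)]
    g = [s[0] for s in segments(gt)]
    m, n = len(p), len(g)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return 100.0 * (1 - d[m][n] / max(m, n, 1))


def f1_at_k(pred, gt, k):
    """Segmental F1 at IoU threshold k (k = 0.5 gives F1@50, etc.).
    A predicted segment is a true positive if its best IoU with an
    unmatched same-class ground-truth segment reaches the threshold."""
    ps, gs = segments(pred), segments(gt)
    matched = [False] * len(gs)
    tp = 0
    for lbl, s, e in ps:
        best_iou, best_j = 0.0, -1
        for j, (gl, g_s, g_e) in enumerate(gs):
            if gl != lbl or matched[j]:
                continue
            inter = max(0, min(e, g_e) - max(s, g_s))
            union = max(e, g_e) - min(s, g_s)
            iou = inter / union
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j >= 0 and best_iou >= k:
            tp += 1
            matched[best_j] = True
    fp = len(ps) - tp
    fn = len(gs) - tp
    return 100.0 * 2 * tp / max(2 * tp + fp + fn, 1)
```

For example, with classes 0 = stomach, 1 = small bowel, 2 = colon, a prediction whose transition points are shifted by a few frames keeps a perfect edit score (the segment order is unchanged) while F1@90 drops, since a shifted boundary can push a segment's IoU below 0.9.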
