Automatic Classification of GI Organs in Wireless Capsule Endoscopy Using a No-Code Platform-Based Deep Learning Model
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Chung, Joowon | - |
| dc.contributor.author | Oh, Dong Jun | - |
| dc.contributor.author | Park, Junseok | - |
| dc.contributor.author | Kim, Su Hwan | - |
| dc.contributor.author | Lim, Yun Jeong | - |
| dc.date.accessioned | 2024-08-08T08:31:16Z | - |
| dc.date.available | 2024-08-08T08:31:16Z | - |
| dc.date.issued | 2023-04 | - |
| dc.identifier.issn | 2075-4418 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/20557 | - |
| dc.description.abstract | The first step in reading a capsule endoscopy (CE) is determining the gastrointestinal (GI) organ. Because CE produces too many inappropriate and repetitive images, automatic organ classification cannot be directly applied to CE videos. In this study, we developed a deep learning algorithm to classify GI organs (the esophagus, stomach, small bowel, and colon) using a no-code platform, applied it to CE videos, and proposed a novel method to visualize the transitional area of each GI organ. We used training data (37,307 images from 24 CE videos) and test data (39,781 images from 30 CE videos) for model development. This model was validated using 100 CE videos that included "normal", "blood", "inflamed", "vascular", and "polypoid" lesions. Our model achieved an overall accuracy of 0.98, precision of 0.89, recall of 0.97, and F1 score of 0.92. When we validated this model relative to the 100 CE videos, it produced average accuracies for the esophagus, stomach, small bowel, and colon of 0.98, 0.96, 0.87, and 0.87, respectively. Increasing the AI score's cut-off improved most performance metrics in each organ (p < 0.05). To locate a transitional area, we visualized the predicted results over time, and setting the cut-off of the AI score to 99.9% resulted in a better intuitive presentation than the baseline. In conclusion, the GI organ classification AI model demonstrated high accuracy on CE videos. The transitional area could be more easily located by adjusting the cut-off of the AI score and visualization of its result over time. | - |
| dc.format.extent | 13 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | Automatic Classification of GI Organs in Wireless Capsule Endoscopy Using a No-Code Platform-Based Deep Learning Model | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/diagnostics13081389 | - |
| dc.identifier.scopusid | 2-s2.0-85153715747 | - |
| dc.identifier.wosid | 000978192600001 | - |
| dc.identifier.bibliographicCitation | Diagnostics, v.13, no.8, pp 1 - 13 | - |
| dc.citation.title | Diagnostics | - |
| dc.citation.volume | 13 | - |
| dc.citation.number | 8 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 13 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | General & Internal Medicine | - |
| dc.relation.journalWebOfScienceCategory | Medicine, General & Internal | - |
| dc.subject.keywordPlus | ARTIFICIAL-INTELLIGENCE | - |
| dc.subject.keywordPlus | BOWEL | - |
| dc.subject.keywordPlus | LESIONS | - |
| dc.subject.keywordAuthor | capsule endoscopy | - |
| dc.subject.keywordAuthor | artificial intelligence | - |
| dc.subject.keywordAuthor | automatic organ classification | - |
| dc.subject.keywordAuthor | automated machine learning | - |
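The abstract describes locating each GI organ's transitional area by raising the AI score's cut-off (to 99.9%) and plotting the confident per-frame predictions over time. The following is a minimal sketch of that idea, not the authors' implementation: the organ names come from the abstract, but the function names, data layout, and toy probabilities are invented for illustration.

```python
# Hypothetical sketch of the cut-off + transition-locating idea from the
# abstract: keep only frames whose top AI score reaches a high cut-off
# (99.9% in the paper), then report where the confident organ changes.
# The frame scores below are toy data, not model output.

ORGANS = ["esophagus", "stomach", "small_bowel", "colon"]

def confident_labels(frame_scores, cutoff=0.999):
    """One organ label per frame, or None when the top score
    does not reach the cut-off."""
    labels = []
    for scores in frame_scores:  # scores: dict organ -> probability
        organ, p = max(scores.items(), key=lambda kv: kv[1])
        labels.append(organ if p >= cutoff else None)
    return labels

def transitions(labels):
    """Frame indices where the confident organ prediction changes."""
    points, last = [], None
    for i, lab in enumerate(labels):
        if lab is not None and lab != last:
            if last is not None:
                points.append((i, last, lab))
            last = lab
    return points

# Toy example: three confident stomach frames, one uncertain frame
# (filtered out by the cut-off), then two confident small-bowel frames.
frames = (
    [{"esophagus": 0.0, "stomach": 0.9999, "small_bowel": 0.0, "colon": 0.0}] * 3
    + [{"esophagus": 0.0, "stomach": 0.6, "small_bowel": 0.4, "colon": 0.0}]
    + [{"esophagus": 0.0, "stomach": 0.0, "small_bowel": 0.9999, "colon": 0.0}] * 2
)
print(transitions(confident_labels(frames)))  # [(4, 'stomach', 'small_bowel')]
```

Plotting these confident labels against frame index (i.e., time) gives the kind of intuitive transition presentation the abstract reports for the 99.9% cut-off.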
