MiMics-Net: A Multimodal Interaction Network for Blastocyst Component Segmentation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Haider, Adnan | - |
| dc.contributor.author | Arsalan, Muhammad | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2026-03-10T00:30:14Z | - |
| dc.date.available | 2026-03-10T00:30:14Z | - |
| dc.date.issued | 2026-02 | - |
| dc.identifier.issn | 2075-4418 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/63932 | - |
| dc.description.abstract | Objectives: Global infertility rates are rapidly increasing. Assisted reproductive technologies combined with artificial intelligence are the next hope for overcoming infertility. In vitro fertilization (IVF) is gaining popularity owing to its increasing success rates. The success rate of IVF essentially depends on the assessment and inspection of blastocysts. Blastocysts can be segmented into several important compartments, and advanced and precise assessment of these compartments is strongly associated with successful pregnancies. However, currently, embryologists must manually analyze blastocysts, which is a time-consuming, subjective, and error-prone process. Several AI-based techniques, including segmentation, have been recently proposed to fill this gap. However, most existing methods rely only on raw grayscale intensity and do not perform well under challenging blastocyst image conditions, such as low contrast, similarity in textures, shape variability, and class imbalance. Methods: To overcome this limitation, we developed a novel and lightweight architecture, the microscopic multimodal interaction segmentation network (MiMics-Net), to accurately segment blastocyst components. MiMics-Net employs a multimodal blastocyst stem to decompose and process each frame into three modalities (photometric intensity, local textures, and directional orientation), followed by feature fusion to enhance segmentation performance. Moreover, MiMic dual-path grouped blocks have been designed, in which parallel grouped convolutional paths are fused through point-wise convolutional layers to increase diverse learning. A lightweight refinement decoder is employed to refine and restore the spatial features while maintaining computational efficiency. Finally, semantic skip pathways are introduced to transfer low- and mid-level spatial features after passing through the grouped and point-wise convolutional layers. Results/Conclusions: MiMics-Net was evaluated using a publicly available human blastocyst dataset and achieved a Jaccard index score of 87.9% while requiring only 0.65 million trainable parameters. © 2026 by the authors. | - |
| dc.format.extent | 16 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI | - |
| dc.title | MiMics-Net: A Multimodal Interaction Network for Blastocyst Component Segmentation | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/diagnostics16040631 | - |
| dc.identifier.scopusid | 2-s2.0-105031279077 | - |
| dc.identifier.wosid | 001701457100001 | - |
| dc.identifier.bibliographicCitation | Diagnostics, v.16, no.4, pp 1 - 16 | - |
| dc.citation.title | Diagnostics | - |
| dc.citation.volume | 16 | - |
| dc.citation.number | 4 | - |
| dc.citation.startPage | 1 | - |
| dc.citation.endPage | 16 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | General & Internal Medicine | - |
| dc.relation.journalWebOfScienceCategory | Medicine, General & Internal | - |
| dc.subject.keywordAuthor | artificial intelligence | - |
| dc.subject.keywordAuthor | blastocyst segmentation | - |
| dc.subject.keywordAuthor | medical image analysis | - |
| dc.subject.keywordAuthor | multimodal segmentation | - |
| dc.subject.keywordAuthor | semantic segmentation | - |
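The abstract describes a multimodal stem that decomposes each frame into photometric intensity, local texture, and directional orientation channels before fusion. The paper's exact operators are not given in this record, so the sketch below is only a rough NumPy analogue under assumed choices (min-max normalization for intensity, local standard deviation for texture, gradient angle for orientation), not the authors' implementation:

```python
import numpy as np

def multimodal_stem(frame, win=3):
    """Decompose a grayscale frame into three modalities and stack them.

    A hypothetical analogue of the multimodal blastocyst stem described
    in the MiMics-Net abstract; the operators here are assumptions.
    """
    f = frame.astype(np.float64)
    # Modality 1: photometric intensity (min-max normalized raw values).
    intensity = (f - f.min()) / (f.max() - f.min() + 1e-8)
    # Modality 2: local texture via the standard deviation in a win x win window.
    pad = win // 2
    padded = np.pad(f, pad, mode="reflect")
    texture = np.empty_like(f)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            texture[i, j] = padded[i:i + win, j:j + win].std()
    # Modality 3: directional orientation from image gradients (radians).
    gy, gx = np.gradient(f)
    orientation = np.arctan2(gy, gx)
    # Fuse by stacking into an (H, W, 3) multimodal tensor; in the actual
    # network, fusion is learned rather than a plain channel stack.
    return np.stack([intensity, texture, orientation], axis=-1)
```

In practice each modality would feed the network's grouped convolutional paths; the channel stack above only illustrates the decomposition step.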
