Empirical and Comparative Study of Long-Sequence Video Consistency in AIGC
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | 장예한 | - |
| dc.contributor.author | 선심이 | - |
| dc.contributor.author | 정진헌 | - |
| dc.date.accessioned | 2026-03-04T03:00:16Z | - |
| dc.date.available | 2026-03-04T03:00:16Z | - |
| dc.date.issued | 2026-02 | - |
| dc.identifier.issn | 1598-2009 | - |
| dc.identifier.issn | 2287-738X | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/63865 | - |
| dc.description.abstract | With the rise of generative AI, AI-based video synthesis has emerged as a transformative tool in film, advertising, and new media. However, complex scenes continue to face challenges such as temporal discontinuity, lack of physical consistency, and style shifts. This study conducts a comparative analysis of JiMeng, Vidu, and Keling AI across six scenarios: forest/animals, city/street, indoor/people, beach/nature, sci-fi/city, and product/exhibition. Using unified prompts and a standardized frame-continuity strategy, 5-second videos (16:9 aspect ratio) were generated under default settings. Results show that JiMeng performs best in urban, sci-fi, and product scenes; Keling excels in natural environments; and Vidu stands out in indoor character expressions. This study proposes a platform evaluation paradigm and highlights scenario-specific strengths, providing essential technical guidance for creative applications in the AI-driven media landscape. | - |
| dc.format.extent | 10 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | 한국디지털콘텐츠학회 | - |
| dc.title | Empirical and Comparative Study of Long-Sequence Video Consistency in AIGC | - |
| dc.title.alternative | AIGC 장시간 시퀀스 영상 일관성 실증 및 비교연구 | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.doi | 10.9728/dcs.2026.27.2.347 | - |
| dc.identifier.bibliographicCitation | 디지털콘텐츠학회논문지, v.27, no.2, pp 347 - 356 | - |
| dc.citation.title | 디지털콘텐츠학회논문지 | - |
| dc.citation.volume | 27 | - |
| dc.citation.number | 2 | - |
| dc.citation.startPage | 347 | - |
| dc.citation.endPage | 356 | - |
| dc.type.docType | Y | - |
| dc.identifier.kciid | ART003305627 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.subject.keywordAuthor | AI Video Generation | - |
| dc.subject.keywordAuthor | Content Coherence | - |
| dc.subject.keywordAuthor | Physical Consistency | - |
| dc.subject.keywordAuthor | Style Stability | - |
| dc.subject.keywordAuthor | Sequential Video | - |
| dc.subject.keywordAuthor | AI 영상 생성 | - |
| dc.subject.keywordAuthor | 내용 연속성 | - |
| dc.subject.keywordAuthor | 물리적 일관성 | - |
| dc.subject.keywordAuthor | 스타일 안정성 | - |
| dc.subject.keywordAuthor | 연속 영상 | - |
