Cited 12 times
Temporal Incoherence-Free Video Retargeting Using Foreground Aware Extrapolation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Cho, Sung In | - |
| dc.contributor.author | Kang, Suk-Ju | - |
| dc.date.accessioned | 2023-04-28T00:41:16Z | - |
| dc.date.available | 2023-04-28T00:41:16Z | - |
| dc.date.issued | 2020 | - |
| dc.identifier.issn | 1057-7149 | - |
| dc.identifier.issn | 1941-0042 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/7151 | - |
| dc.description.abstract | Video retargeting adjusts the aspect ratio of a given video to a target aspect ratio. Temporal incoherence of video content, which frequently arises in video retargeting, is the most dominant factor degrading the quality of retargeted videos. Current methods for maintaining temporal coherence use all frames of the input video; however, they cannot be implemented as on-time systems because of their tremendous computational complexity. To the best of our knowledge, no existing on-time video retargeting method avoids spatial distortion while perfectly maintaining temporal coherence. In this paper, we propose a novel on-time video retargeting method that perfectly maintains temporal coherence and prevents spatial distortion using only two consecutive input frames. In our method, maximum a posteriori (MAP)-based foreground-aware block matching is used for the extrapolation that extends the side areas of a given video to adjust its aspect ratio to the target. To maintain the temporal coherence of the extended area, the block matching result for the backward warping-based extrapolation of the first frame after a scene change is reused for subsequent frames until the next scene change occurs. In addition, we propose a scene scenario-adaptive fallback scheme to prevent the severe distortions that can occur when reusing block matching results or extending sides by extrapolation. Simulation results show that the proposed method improves the bidirectional similarity value, which measures the quality of video retargeting, by up to 10.26 compared with existing on-time video retargeting methods. | - |
| dc.format.extent | 14 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
| dc.title | Temporal Incoherence-Free Video Retargeting Using Foreground Aware Extrapolation | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/TIP.2020.2977171 | - |
| dc.identifier.scopusid | 2-s2.0-85081720924 | - |
| dc.identifier.wosid | 000526524400002 | - |
| dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON IMAGE PROCESSING, v.29, pp 4848 - 4861 | - |
| dc.citation.title | IEEE TRANSACTIONS ON IMAGE PROCESSING | - |
| dc.citation.volume | 29 | - |
| dc.citation.startPage | 4848 | - |
| dc.citation.endPage | 4861 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.subject.keywordAuthor | Streaming media | - |
| dc.subject.keywordAuthor | Coherence | - |
| dc.subject.keywordAuthor | Extrapolation | - |
| dc.subject.keywordAuthor | Distortion | - |
| dc.subject.keywordAuthor | Strain | - |
| dc.subject.keywordAuthor | Two dimensional displays | - |
| dc.subject.keywordAuthor | Computational complexity | - |
| dc.subject.keywordAuthor | Video retargeting | - |
| dc.subject.keywordAuthor | MAP-based block matching | - |
| dc.subject.keywordAuthor | fallback | - |
| dc.subject.keywordAuthor | extrapolation | - |
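The abstract's central idea — compute block matching once on the first frame after a scene change, then reuse the result for every frame until the next cut — can be sketched roughly as follows. This is an illustrative sketch only: the simple SAD column-block matcher, the mean-difference scene-cut detector, and all function names are assumptions standing in for the paper's MAP-based foreground-aware matching, its actual scene-change detection, and its fallback scheme, none of which are reproduced here.

```python
import numpy as np

def scene_changed(prev, curr, thresh=30.0):
    # Naive scene-cut detector (mean absolute frame difference);
    # a stand-in for the paper's actual detector.
    return float(np.mean(np.abs(curr.astype(np.float64) -
                                prev.astype(np.float64)))) > thresh

def match_offsets(frame, block=8, search=24):
    # For each block row, find where inside the frame the right-edge block
    # recurs (SAD matching; a stand-in for MAP-based foreground-aware
    # matching). The upper search bound keeps candidates from overlapping
    # the edge template itself.
    h, w = frame.shape
    offsets = []
    for y in range(0, h - block + 1, block):
        tmpl = frame[y:y + block, w - block:w].astype(np.float64)
        best_x, best = 0, np.inf
        for x in range(max(0, w - block - search), w - 2 * block + 1):
            cand = frame[y:y + block, x:x + block].astype(np.float64)
            cost = np.abs(cand - tmpl).sum()
            if cost < best:
                best, best_x = cost, x
        offsets.append(best_x)
    return offsets

def extend_right(frame, offsets, ext=8, block=8):
    # Backward-warping-style extrapolation: fill `ext` extra columns per
    # block row by continuing the texture just past the matched block.
    # Assumes ext <= block and height divisible by block, for brevity.
    h, w = frame.shape
    out = np.zeros((h, w + ext), dtype=frame.dtype)
    out[:, :w] = frame
    for i, y in enumerate(range(0, h - block + 1, block)):
        src = offsets[i] + block
        out[y:y + block, w:w + ext] = frame[y:y + block, src:src + ext]
    return out

def retarget(frames, ext=8, block=8):
    # Temporal coherence by reuse: offsets are computed only on the first
    # frame of each scene and cached until the next detected cut.
    cached, prev, out = None, None, []
    for f in frames:
        if prev is None or scene_changed(prev, f):
            cached = match_offsets(f, block=block)
        out.append(extend_right(f, cached, ext=ext, block=block))
        prev = f
    return out
```

Because the cached offsets are fixed within a scene, identical (or near-identical) consecutive frames produce identical extensions, which is the temporal-coherence property the abstract describes; the paper's fallback scheme would additionally detect when such reuse causes severe distortion.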
