Detailed Information


Generating, retrieving persona and generating responses for long-term open-domain dialogue

Full metadata record
dc.contributor.author: Cha, Dohyun
dc.contributor.author: Lee, Dawon
dc.contributor.author: Kim, Jihie
dc.date.accessioned: 2025-07-28T06:01:18Z
dc.date.available: 2025-07-28T06:01:18Z
dc.date.issued: 2025-07
dc.identifier.issn: 2376-5992
dc.identifier.uri: https://scholarworks.dongguk.edu/handle/sw.dongguk/58784
dc.description.abstract: Open-domain dialogue systems have shown remarkable capabilities in generating natural and consistent responses in short-term conversations. However, in long-term conversations such as multi-session chat (MSC), where the dialogue history exceeds the model's maximum input length (e.g., 1,024 tokens), existing dialogue generation systems often overlook information from earlier dialogues, leading to a loss of context. To prevent such loss and generate natural, consistent responses, we propose GRGPerDialogue, a framework consisting of three main stages: generating a persona from past dialogues, retrieving the persona sentences relevant to the current utterance, and generating responses based on both the persona and the recent dialogue. In the first stage, we generate each speaker's persona in real time with diverse expressions, leveraging in-context learning (ICL) with Llama 2. Subsequently, we construct a new dataset, Persona-Utterance Pair (PUP), and use it to train Facebook's Dense Passage Retrieval (DPR) model to retrieve persona sentences relevant to the current utterance. Finally, we train generative models such as Generative Pre-trained Transformer 2 (GPT-2) and Bidirectional and Auto-Regressive Transformers (BART) to generate responses based on the retrieved persona sentences and the recent dialogue. Experimental results on a long-term dialogue dataset demonstrate that GRGPerDialogue outperforms baseline models by approximately 0.6% to 1% on the ROUGE-1 metric. Furthermore, human evaluation supports the effectiveness of GRGPerDialogue. These results indicate that GRGPerDialogue can generate responses that are not only more fluent and consistent, but also more relevant to the dialogue history than those of baseline models.
dc.language: English
dc.language.iso: ENG
dc.publisher: PEERJ INC
dc.title: Generating, retrieving persona and generating responses for long-term open-domain dialogue
dc.type: Article
dc.publisher.location: United Kingdom
dc.identifier.doi: 10.7717/peerj-cs.2979
dc.identifier.scopusid: 2-s2.0-105025471737
dc.identifier.wosid: 001531885400001
dc.identifier.bibliographicCitation: PeerJ Computer Science, v.11
dc.citation.title: PeerJ Computer Science
dc.citation.volume: 11
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Computer Science, Theory & Methods
dc.subject.keywordAuthor: Natural language processing
dc.subject.keywordAuthor: Open-domain dialogue
dc.subject.keywordAuthor: Dialogue generation systems
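The abstract's three-stage pipeline (generate persona, retrieve relevant persona, generate a response) can be sketched as below. This is an illustrative outline only, not the authors' implementation: GRGPerDialogue uses Llama 2 in-context learning for stage 1, a DPR model trained on the PUP dataset for stage 2, and fine-tuned GPT-2/BART for stage 3, whereas this sketch substitutes a first-person heuristic, a bag-of-words cosine retriever, and a stub generator so that it runs standalone. All function names here are hypothetical.

```python
import math
import re
from collections import Counter

def generate_persona(dialogue_history):
    """Stage 1 stand-in: distill persona sentences from past dialogue.
    The paper uses Llama 2 in-context learning; this toy heuristic
    just keeps first-person statements."""
    return [u for u in dialogue_history if u.lower().startswith("i ")]

def _bow_cosine(a, b):
    """Bag-of-words cosine similarity (toy stand-in for DPR embeddings)."""
    ca = Counter(re.findall(r"\w+", a.lower()))
    cb = Counter(re.findall(r"\w+", b.lower()))
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_persona(persona_sentences, current_utterance, top_k=1):
    """Stage 2 stand-in: rank persona sentences by relevance to the
    current utterance. The paper trains DPR on its PUP dataset instead."""
    ranked = sorted(persona_sentences,
                    key=lambda p: _bow_cosine(p, current_utterance),
                    reverse=True)
    return ranked[:top_k]

def generate_response(persona, recent_dialogue, current_utterance):
    """Stage 3 stand-in: condition the reply on the retrieved persona plus
    recent turns. The paper fine-tunes GPT-2 / BART on this input."""
    context = " | ".join(persona + recent_dialogue[-2:] + [current_utterance])
    return f"[response conditioned on: {context}]"

history = ["I love hiking in the mountains.",
           "The weather was great last weekend.",
           "I work as a nurse at a city hospital."]
query = "Do you work at a hospital?"
persona = generate_persona(history)                  # stage 1
relevant = retrieve_persona(persona, query)          # stage 2
reply = generate_response(relevant, history, query)  # stage 3
```

The point of the structure, which the sketch preserves, is that only the retrieved persona sentences (not the full multi-session history) are passed to the generator, keeping the input within the model's maximum length.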
Files in This Item
There are no files associated with this item.

Appears in Collections
ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Ji Hie
College of Advanced Convergence Engineering (Department of Computer Science and Artificial Intelligence)
