Show simple item record
A few-shot learning method based on knowledge graph in large language models
dc.contributor.author | Wang, FeiLong | |
dc.contributor.author | Shi, Donghui | |
dc.contributor.author | Aguilar, Jose | |
dc.contributor.author | Cui, Xinyi | |
dc.date.accessioned | 2025-01-13T16:11:30Z | |
dc.date.available | 2025-01-13T16:11:30Z | |
dc.date.issued | 2024-12-15 | |
dc.identifier.issn | 2364-415X | es |
dc.identifier.uri | https://hdl.handle.net/20.500.12761/1886 | |
dc.description.abstract | The emergence of large language models has significantly transformed natural language processing and text generation. Fine-tuning these models for specific domains enables them to generate answers tailored to the unique requirements of those fields, such as the legal or medical domains. However, these models often perform poorly in few-shot scenarios. Herein, the challenge of data scarcity when fine-tuning large language models in low-sample scenarios was addressed by proposing three different KDGI (Knowledge-Driven Dialog Generation Instances) generation strategies: entity-based KDGI generation, relation-based KDGI generation, and semantic-based multi-level KDGI generation. These strategies aimed to enhance few-shot datasets to address the issue of low fine-tuning metrics caused by insufficient data. Specifically, knowledge graphs were utilized to define the distinct KDGI generation strategies for enhancing few-shot data. Subsequently, these KDGI data were employed to fine-tune the large language model using the P-tuning v2 approach. Through multiple experiments, the effectiveness of the three KDGI generation strategies was validated using BLEU and ROUGE metrics, and the fine-tuning benefits of few-shot learning on large language models were confirmed. To further evaluate the effectiveness of KDGI, additional experiments were conducted, including LoRA-based fine-tuning in the medical domain and comparative studies leveraging Masked Language Model augmentation, back-translation, and noise injection methods. Consequently, the paper proposes a reference method for leveraging knowledge graphs in prompt data engineering, which shows potential in facilitating few-shot learning for fine-tuning large language models. | es |
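The entity-based KDGI strategy the abstract describes can be sketched as mapping knowledge-graph triples to prompt/answer fine-tuning instances. This is a minimal illustration only; the function name, question template, and example triples are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of entity-based KDGI generation: each (head, relation,
# tail) triple from a knowledge graph becomes one prompt/answer instance
# suitable for augmenting a few-shot fine-tuning dataset.

def entity_based_kdgi(triples):
    """Turn (head, relation, tail) triples into prompt/answer instances."""
    instances = []
    for head, relation, tail in triples:
        # Illustrative question template; a real system would likely use
        # several templates per relation type.
        prompt = f"What is the {relation} of {head}?"
        instances.append({"prompt": prompt, "answer": tail})
    return instances

# Example triples (invented for illustration, e.g. a medical-domain graph):
triples = [
    ("aspirin", "indication", "pain relief"),
    ("aspirin", "drug class", "NSAID"),
]
for inst in entity_based_kdgi(triples):
    print(inst["prompt"], "->", inst["answer"])
```

The relation-based and semantic multi-level strategies would follow the same pattern but vary which graph elements (relations, multi-hop paths) drive the generated instances.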
dc.language.iso | eng | es |
dc.publisher | Springer | es |
dc.title | A few-shot learning method based on knowledge graph in large language models | es |
dc.type | journal article | es |
dc.journal.title | International Journal of Data Science and Analytics | es |
dc.type.hasVersion | AO | es |
dc.rights.accessRights | embargoed access | es |
dc.identifier.doi | 10.1007/s41060-024-00699-3 | es |
dc.description.refereed | TRUE | es |
dc.description.status | pub | es |