METHOD OF IMPROVING THE QUALITY OF TEXT GENERATION BY REPEATED PROCESSING OF THE GENERATED TEXT BY THE MODEL

Authors

P. Zdebskyi, A. Berko

DOI:

https://doi.org/10.31891/2307-5732-2024-331-39

Keywords:

gpt-4, alignment, text generation, natural language processing, natural language inference

Abstract

The growing popularity of large language models has emphasized the need to align them with the needs of their users. Alignment is one of the most important subtasks of artificial intelligence safety. Some AI researchers argue that this problem will become even more pressing in the future, as systems grow more powerful and thus become better at finding workarounds to the tasks set before them. Such problems already arise today in commercial products built on large language models, in recommender systems, in autonomous vehicles, and elsewhere.

The task of aligning artificial intelligence systems is to steer them toward human goals, preferences, and ethical principles. A system is considered aligned if it achieves its intended goals, and misaligned if it pursues goals that were not intended. The difficulty of alignment lies in specifying universally desired behavior, which is why developers of such systems often specify simplified intermediate goals instead, for example, optimizing for human feedback. Such an approach, however, can create loopholes and reward the system merely for imitating the desired behavior: the system can learn to achieve the intermediate goals without achieving the intended final ones. Such misaligned systems can cause harm in real-world use.

The paper proposes a method for improving the quality of text generation by large language models, using the GPT-4 model as an example. An iterative procedure aligns the generated text with the user's request by retraining the model on the examples on which it makes mistakes. Retraining is triggered automatically: the examples on which an error was made are fed back to the model's input for retraining.
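
The abstract describes this loop only at a high level. The Python sketch below is one possible reading of it, not the authors' implementation: the functions generate, entails, and fine_tune and the max_rounds parameter are hypothetical placeholders for the model call, the natural-language-inference check, and the retraining step, none of which are specified in the paper.

```python
# Minimal sketch of the iterative re-processing loop described in the abstract.
# generate(), entails() and fine_tune() are hypothetical placeholders; the paper
# does not name concrete APIs, NLI models, or retraining settings.

from typing import List, Tuple


def generate(model: str, prompt: str) -> str:
    """Placeholder: call the language model (e.g., GPT-4) with the user's request."""
    raise NotImplementedError


def entails(request: str, output: str) -> bool:
    """Placeholder: NLI-style check that the generated text satisfies the request."""
    raise NotImplementedError


def fine_tune(model: str, examples: List[Tuple[str, str]]) -> str:
    """Placeholder: retrain the model on the collected error examples and
    return an identifier of the updated model."""
    raise NotImplementedError


def iterative_alignment(model: str, requests: List[str], max_rounds: int = 3) -> str:
    """Repeatedly generate, detect mismatches, and retrain on the failing examples."""
    for _ in range(max_rounds):
        errors: List[Tuple[str, str]] = []
        for request in requests:
            output = generate(model, request)
            if not entails(request, output):
                # The failing (request, output) pair is fed back for retraining.
                errors.append((request, output))
        if not errors:
            break  # every generated text already matches its request
        model = fine_tune(model, errors)
    return model
```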

Compared with the original base model, the proposed method shows a significant improvement, increasing accuracy from 82.5 to 90. In the experiments, the method showed promise for practical application in real text-generation tasks.

Published

2024-02-29

How to Cite

ZDEBSKYI, P., & BERKO, A. (2024). METHOD OF IMPROVING THE QUALITY OF TEXT GENERATION BY REPEATED PROCESSING OF THE GENERATED TEXT BY THE MODEL. Herald of Khmelnytskyi National University. Technical Sciences, 331(1), 259-263. https://doi.org/10.31891/2307-5732-2024-331-39