TRAINING GENERATIVE ADVERSARIAL MODELS FOR IMAGE TRANSLATION UNDER CONDITIONS OF LIMITED DATA

Authors

Y. Kryvenchuk, S. Chaban
DOI:

https://doi.org/10.31891/2307-5732-2023-329-6-204-207

Keywords:

Generative Adversarial Networks, data augmentation, image translation

Abstract

In today's rapidly evolving world, characterized by the swift progress of information technology and the widespread adoption of artificial intelligence across diverse domains, machine learning and deep learning stand out as particularly impactful. Within this context, Generative Adversarial Networks (GANs) have emerged as powerful tools for producing realistic images and for translating data between different domains and modalities. The CycleGAN model in particular has established itself as an effective instrument for image translation between disparate domains, as it does not require paired training data: it learns to transform images from one category to another while enforcing cycle consistency across domains. Nevertheless, a principal challenge in applying GANs is the need for a substantial volume of data for effective training. When generative adversarial models such as CycleGAN are trained on limited datasets, they tend to overfit the restricted training samples, which compromises their ability to generalize to diverse input data. As a result, the model may lack the diversity needed to faithfully reproduce various aspects of the source images, leading to a loss of detail and the emergence of visual artifacts in the generated images. In scenarios where data availability is constrained, devising efficient training methodologies for generative adversarial models therefore becomes all the more important. The present work is dedicated to the exploration and development of such training methodologies, with a specific focus on the challenges of image translation tasks under limited available data. Through these strategies, we aim to enhance the robustness and generalization capacity of GANs and to facilitate their effective application in real-world scenarios characterized by data scarcity.
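For context, the cycle-consistency constraint referred to above is, in the original CycleGAN formulation (Zhu et al., 2017), expressed as an L1 reconstruction penalty over both translation directions; the notation below (generators G: X → Y and F: Y → X between domains X and Y) follows that paper and is not defined in this abstract:

\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[ \lVert F(G(x)) - x \rVert_{1} \right] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\left[ \lVert G(F(y)) - y \rVert_{1} \right]

This term is added to the two adversarial losses with a weighting coefficient λ, so that an image translated to the other domain and back remains close to the original; it is this constraint that allows training without paired examples.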

Published

2023-12-31

How to Cite

KRYVENCHUK, Y., & CHABAN, S. (2023). TRAINING GENERATIVE ADVERSARIAL MODELS FOR IMAGE TRANSLATION UNDER CONDITIONS OF LIMITED DATA. Herald of Khmelnytskyi National University. Technical Sciences, 329(6), 204-207. https://doi.org/10.31891/2307-5732-2023-329-6-204-207