METHODS OF IMPROVING SOFTWARE TESTING EFFICIENCY WITH LLM: INTRODUCING LLMTESTER

Authors

DOI:

https://doi.org/10.31891/

Keywords:

automated testing, large language models, test generation, semantic coverage, API analysis, LLMTester

Abstract

Modern automated test generation tools achieve high code coverage but largely ignore the semantic aspects of software. Large Language Models (LLMs) open new horizons in testing, particularly in creating meaningful, logically justified tests, conducting deep API documentation analysis, and detecting complex logical defects. This paper introduces the LLMTester method, which combines the intelligent capabilities of LLMs with classical testing approaches. The method involves automatic generation of unit tests and functional scenarios, evaluation of their semantic coverage as a complement to traditional metrics, and automated failure analysis. Experimental evaluation on the open-source web application PrestaShop demonstrates a significant improvement in testing quality, a reduction in test creation time, and increased defect detection efficiency compared to traditional approaches. Our work highlights the potential of LLMs not only for automation but also for intelligent enhancement of the software quality assurance process, particularly through the introduction of a new semantic coverage metric.
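To illustrate the idea of semantic coverage mentioned in the abstract, the following is a minimal, purely hypothetical sketch. The paper's actual LLMTester metric is not reproduced here; this version substitutes simple token overlap for LLM-based semantic matching, scoring what fraction of documented API behaviors (requirement phrases) are exercised by at least one generated test. All function names, thresholds, and sample data are illustrative assumptions.

```python
# Hypothetical sketch of a semantic-coverage-style metric.
# NOT the paper's definition: token overlap stands in for LLM-based
# semantic matching between requirements and generated tests.

def tokenize(text):
    """Lowercase a string, strip parentheses, and split into a token set."""
    return set(text.lower().replace("(", " ").replace(")", " ").split())

def semantic_coverage(requirements, tests, threshold=0.5):
    """Fraction of requirements matched by at least one test.

    A requirement counts as covered when the share of its tokens that
    also appear in some test's text reaches `threshold`.
    """
    covered = 0
    for req in requirements:
        req_tokens = tokenize(req)
        for test in tests:
            overlap = len(req_tokens & tokenize(test)) / len(req_tokens)
            if overlap >= threshold:
                covered += 1
                break
    return covered / len(requirements) if requirements else 0.0

# Illustrative data: two documented behaviors, one generated test.
requirements = [
    "cart total updates when an item is added",
    "discount code reduces the order price",
]
tests = [
    "def test_add_item(): cart.add(item); assert cart.total updates when item is added",
]
print(semantic_coverage(requirements, tests))  # -> 0.5 (one of two behaviors covered)
```

In this toy run, the first behavior is covered by the generated test while the second (discounts) is not, yielding a semantic coverage of 0.5 even if line coverage of the cart code were already high, which is the gap the abstract argues traditional metrics miss.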

Published

2025-12-11

How to Cite

MAKOVYSHYN, V. (2025). METHODS OF IMPROVING SOFTWARE TESTING EFFICIENCY WITH LLM: INTRODUCING LLMTESTER. Herald of Khmelnytskyi National University. Technical Sciences, 359(6.1), 329-333. https://doi.org/10.31891/