METHODS OF IMPROVING SOFTWARE TESTING EFFICIENCY WITH LLM: INTRODUCING LLMTESTER
DOI: https://doi.org/10.31891/

Keywords: automated testing, large language models, test generation, semantic coverage, API analysis, LLMTester

Abstract
Modern automated test generation tools achieve high code coverage but largely ignore the semantic aspects of software. Large Language Models (LLMs) open new horizons in testing, particularly in creating meaningful, logically justified tests, conducting deep API documentation analysis, and detecting complex logical defects. This paper introduces the LLMTester method, which combines the intelligent capabilities of LLMs with classical testing approaches. The method involves automatic generation of unit tests and functional scenarios, evaluation of their semantic coverage as a complement to traditional metrics, and automated failure analysis. Experimental results on the open-source web application PrestaShop demonstrate a significant improvement in testing quality, a reduction in test creation time, and increased defect detection efficiency compared to traditional approaches. Our work highlights the potential of LLMs not only for automation but also for intelligent enhancement of the software quality assurance process, particularly through the introduction of a new semantic coverage metric.
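The abstract names the semantic coverage metric but does not publish its implementation. Below is a minimal illustrative sketch of one plausible reading: a requirement counts as semantically covered if at least one generated test's description is close to it in embedding space. The function name, the chosen encoder, and the threshold are all assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of a semantic-coverage metric in the spirit of LLMTester.
# All names are illustrative; the paper does not publish its implementation.
from sentence_transformers import SentenceTransformer, util


def semantic_coverage(requirements, test_descriptions, threshold=0.7):
    """Fraction of requirements matched by at least one test above threshold."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
    req_emb = model.encode(requirements, convert_to_tensor=True)
    test_emb = model.encode(test_descriptions, convert_to_tensor=True)
    sims = util.cos_sim(req_emb, test_emb)            # |reqs| x |tests| similarity matrix
    covered = sims.max(dim=1).values >= threshold     # best-matching test per requirement
    return covered.float().mean().item()


if __name__ == "__main__":
    reqs = [
        "The cart total must update when a discount code is applied.",
        "Checkout must reject orders with an empty cart.",
    ]
    tests = [
        "test_discount_code_updates_cart_total",
        "test_checkout_fails_for_empty_cart",
    ]
    print(f"semantic coverage: {semantic_coverage(reqs, tests):.2f}")
```

Under this reading, the threshold is a tunable trade-off: a higher value demands tests that closely paraphrase the requirement, while a lower one accepts looser matches, which is why the metric is positioned as a complement to, not a replacement for, structural coverage.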
License
Copyright (c) 2025 Volodymyr Makovyshyn (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.