DETECTING GENERATIVE AI HALLUCINATIONS IN ANALYTIC DATASETS OF SOFTWARE SYSTEMS

Authors

Saveliev R., Dendiuk M.

DOI:

https://doi.org/10.31891/2307-5732-2024-343-6-38

Keywords:

Generative AI, AI hallucinations, data analytics, software systems, error detection

Abstract

Generative AI has brought both progress and a new threat to software system analytics: hallucinations. Such outputs, though plausible, depart substantially from reality and make it difficult to rely on AI-generated insights. The models behind this seamless intelligence and automation are typically deep learning-based and lack the common sense and contextual awareness that humans possess.

These models are trained on extensive data and can spot patterns, but they are prone to fitting spurious relations, which leads to erroneous conclusions. Several factors contribute to these hallucinations. A model can easily be steered in the wrong direction by biased or incomplete training data. If it focuses too heavily on its training inputs, generalization suffers, and inputs that stray far from the training data further increase the risk of hallucinations. The neural architectures themselves are also culprits: these models imitate how the human brain works with mathematics rather than truly understanding the problem.

Dealing with this requires a clear strategy. Data is the core of any AI application, so data cleansing, bias detection, and representativeness checks are powerful first-line tools. Selecting an appropriate model architecture during training is equally important.
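As a rough illustration of these data-quality gates (an assumed setup, not the article's implementation), the Python sketch below checks a tabular analytic dataset with hypothetical columns such as module, defect_rate and label for missing values, duplicate records and under-represented classes before it reaches a generative model.

import pandas as pd

def data_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Basic cleansing and representativeness checks for an analytic dataset."""
    report = {
        # share of missing values per column
        "missing_ratio": df.isna().mean().to_dict(),
        # exact duplicate records inflate spurious patterns
        "duplicate_rows": int(df.duplicated().sum()),
        # class balance as a crude representativeness signal
        "label_distribution": df[label_col].value_counts(normalize=True).to_dict(),
    }
    # labels below an assumed 5% share are flagged as under-represented
    report["underrepresented_labels"] = [
        lbl for lbl, share in report["label_distribution"].items() if share < 0.05
    ]
    return report

# Made-up software-metrics data for demonstration only
df = pd.DataFrame({
    "module": ["auth", "auth", "ui", "db", None],
    "defect_rate": [0.02, 0.02, 0.30, 0.05, 0.07],
    "label": ["ok", "ok", "risky", "ok", "ok"],
})
print(data_quality_report(df, label_col="label"))

Such a report can be reviewed by an engineer or used to block an analysis run automatically when agreed thresholds are exceeded.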

Detection and mitigation are not the same, but they are equally important. Anomaly detection algorithms can flag unusual outputs, while domain-specific rules can serve as a "sanity check" against illogical conclusions. Ensembles of AI models add diversity, which reduces risk. Human intervention is still necessary: domain specialists can further verify AI insights and catch subtle errors.
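To make the combination of statistical detection and domain-specific sanity checks concrete, the sketch below (an assumed setup with hypothetical per-module metrics, not the authors' implementation) pairs a scikit-learn isolation forest with a hand-written rule over outputs supposedly produced by a generative model.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical numeric summaries extracted from AI-generated analytic reports:
# [predicted defect rate, predicted test coverage] per module.
generated = np.array([
    [0.02, 0.85],
    [0.04, 0.80],
    [0.03, 0.90],
    [0.95, 0.10],   # the kind of implausible outlier a hallucination might produce
    [0.05, 0.88],
])

# Statistical detection: flag outputs that look unusual compared with the batch
# (fit_predict returns -1 for anomalies, 1 for normal points).
detector = IsolationForest(contamination=0.2, random_state=0)
flags = detector.fit_predict(generated)

# Domain-specific "sanity check": assumed rules stating that both values must
# lie in [0, 1] and that a defect rate above 0.5 is implausible in this domain.
def sanity_check(defect_rate: float, coverage: float) -> bool:
    return 0.0 <= defect_rate <= 0.5 and 0.0 <= coverage <= 1.0

for row, flag in zip(generated, flags):
    suspicious = flag == -1 or not sanity_check(*row)
    print(row, "-> flag for human review" if suspicious else "-> looks plausible")

In practice, outputs flagged by either mechanism could be routed to a domain specialist for review, keeping the human in the loop described above.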

A potentially interesting direction is transitioning from conventional LLMs to retrieval-augmented generation (RAG). RAG models draw on external content, so their conclusions are grounded in factual information and they are less likely to make things up. "Self-RAG" goes one step further: the model verifies its own outputs by looking up external content.
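The retrieve-then-verify idea behind RAG and Self-RAG can be illustrated with a deliberately tiny sketch: the knowledge base, the word-overlap retriever and the support threshold below are illustrative assumptions standing in for a real vector index and a trained critic model, not the mechanism of the cited approaches.

import re

# Stand-in "external content"; a real RAG pipeline would query a vector index
# over documentation, tickets or monitoring data.
KNOWLEDGE_BASE = [
    "Release 2.4 of the payment service reduced average latency to 120 ms.",
    "The authentication module has 87 percent unit-test coverage.",
]

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(claim: str, k: int = 1) -> list:
    """Return the k snippets with the largest word overlap with the claim."""
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: len(tokens(doc) & tokens(claim)),
                    reverse=True)
    return ranked[:k]

def is_supported(claim: str, threshold: float = 0.5) -> bool:
    """Crude verification: accept the claim only if enough of its words are
    covered by the best-matching retrieved snippet."""
    best = retrieve(claim, k=1)[0]
    claim_tokens = tokens(claim)
    overlap = len(claim_tokens & tokens(best)) / max(len(claim_tokens), 1)
    return overlap >= threshold

print(is_supported("The payment service latency dropped to 120 ms in release 2.4"))  # True: grounded
print(is_supported("The database layer was rewritten in Rust last month"))           # False: possible hallucination

In a production pipeline the overlap heuristic would be replaced by dense retrieval and a model-based critique step, but the control flow stays the same: generate a claim, retrieve evidence, and accept only claims the evidence supports.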

Published

2024-12-16

How to Cite

SAVELIEV, R., & DENDIUK, M. (2024). DETECTING GENERATIVE AI HALLUCINATIONS IN ANALYTIC DATASETS OF SOFTWARE SYSTEMS. Herald of Khmelnytskyi National University. Technical Sciences, 343(6(1)), 257–261. https://doi.org/10.31891/2307-5732-2024-343-6-38