ALGORITHMIC ACCOUNTABILITY: TECHNICAL MODELS OF EXPLAINABLE ARTIFICIAL INTELLIGENCE AND LEGAL CHALLENGES OF THEIR IMPLEMENTATION
DOI: https://doi.org/10.31891/2307-5732-2026-363-11

Keywords: artificial intelligence, explainable artificial intelligence, algorithmic accountability, algorithmic transparency, XAI, legal responsibility, AI ethics, EU AI Act, ISO/IEC 42001:2023, model audit, AI risk management, legal regulation of AI

Abstract
The article examines the issue of algorithmic accountability in the context of the growing role of Artificial Intelligence (AI) and the urgent need for transparency in decision-making processes. The proliferation of automated systems in public administration, finance, justice, and healthcare highlights the importance of explainability in AI-based decisions, especially in cases where such decisions directly affect human rights, freedoms, and legal responsibilities. The research focuses on technical models of Explainable Artificial Intelligence (XAI), including SHAP, LIME, and Grad-CAM, which enable the interpretation of complex deep learning algorithms and increase trust in AI-driven systems.
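To illustrate the interpretation techniques named above, the following is a minimal sketch of the core idea behind LIME-style local explanation: perturb the input around a single instance, query the black-box model, and fit a linear surrogate whose coefficients approximate local feature importance. This is illustrative only; the function name `explain_locally` and the toy model are assumptions for this example, and the actual LIME library additionally uses kernel-weighted sampling and regularized regression.

```python
import numpy as np

def explain_locally(black_box, x, n_samples=500, scale=0.1, seed=0):
    """Approximate local feature attributions for `black_box` at point `x`.

    Illustrative LIME-style sketch: sample perturbations around x,
    evaluate the model, and fit an unweighted least-squares linear
    surrogate; its per-feature coefficients serve as attributions.
    """
    rng = np.random.default_rng(seed)
    # Sample perturbed inputs in a Gaussian neighborhood of x
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = np.array([black_box(z) for z in Z])
    # Fit y ~ Z @ w + b via least squares (intercept via a ones column)
    A = np.hstack([Z, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # drop intercept; keep per-feature attributions

# Toy black box: depends strongly on feature 0, weakly on feature 1
f = lambda z: 3.0 * z[0] + 0.1 * z[1]
weights = explain_locally(f, np.array([1.0, 1.0]))
# For this linear model the surrogate recovers the true coefficients,
# so the attribution for feature 0 dominates that for feature 1.
```

For a genuinely nonlinear model the recovered weights are only locally valid near `x`, which is precisely the trade-off such post-hoc explanations accept in exchange for model-agnosticism.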
International regulatory initiatives such as the EU AI Act, OECD AI Principles, and ISO/IEC 42001:2023 are analyzed alongside Ukrainian approaches to AI governance, with particular attention to issues of compliance, risk management, and accountability mechanisms. The study reveals the interdependence between technical transparency and legal accountability for algorithmic bias, errors, and potential harm caused by automated decision-making systems. Special emphasis is placed on ethical challenges related to fairness, non-discrimination, and human oversight in high-risk AI applications.
It is concluded that the effective implementation of XAI requires synergy between technical standards, ethical principles, and legal regulation. A promising direction for further research involves the development of a national methodology for assessing algorithmic responsibility, the establishment of standardized audit procedures for AI systems, and their integration into Ukrainian legislation on digital ethics and artificial intelligence. The results of the study may be useful for researchers, policymakers, legal experts, and developers involved in the design and regulation of trustworthy AI systems.
License
Copyright (c) 2026 TARAS STYSLO (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.