A REVIEW OF MACHINE LEARNING INFERENCE FRAMEWORKS FOR iOS-BASED MOBILE APPLICATIONS
DOI: https://doi.org/10.31891/2307-5732-2025-355-94

Keywords: machine learning, mobile inference, iOS, ML frameworks

Abstract
This paper provides a comprehensive review of current frameworks for machine learning (ML) inference on the iOS platform. With the growing computational capabilities of mobile devices, on-device ML inference has become a viable and increasingly popular approach, reducing reliance on server-side processing. The review identifies and evaluates key ML frameworks suitable for mobile deployment, including Core ML, TensorFlow Lite (recently rebranded as LiteRT), ONNX Runtime, PyTorch Mobile, and ExecuTorch. The study outlines the historical evolution of ML tools, from early frameworks such as Theano and Caffe to modern, production-ready solutions.
The analysis focuses on core differences in model format support, integration complexity, hardware acceleration capabilities, and compatibility with the iOS ecosystem. Core ML, as Apple’s native framework, is optimized for seamless integration and hardware performance, while TensorFlow Lite and ONNX Runtime offer cross-platform potential and flexibility. PyTorch-based solutions provide dynamic graph execution and are favored by the research community, especially after the introduction of ExecuTorch, which enhances mobile efficiency and hardware support.
The article also discusses the decline of legacy frameworks and the consolidation around more versatile and actively maintained tools. The strengths and limitations of each framework are compared based on use-case suitability, ease of integration, and optimization support.
This work aims to guide mobile developers in making informed decisions when selecting an ML inference framework for iOS apps. It highlights how different tools meet the practical demands of mobile AI applications and stresses the importance of considering factors such as platform constraints, energy efficiency, and maintainability. Future research directions include benchmarking model performance across frameworks and evaluating inference speed and resource usage on real-world iOS devices.
License
Copyright (c) 2025 МИКОЛА ФАРІОН (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.