Research on automotive AI operating system (AIOS): from AI application and AI-driven to AI-native
Automotive Operating System and AIOS Integration Research Report, 2025, released by ResearchInChina, explains the status quo and trends of AI application in automotive operating systems (OS), and analyzes how vehicle OS and AIOS mutually empower and co-evolve.
The relationship between vehicle OS and AIOS:
From 2023 to 2024, with the rise of central computing architecture, domain operating systems started evolving towards vehicle OS which takes on integrating the full-domain software system.
In the second half of 2024, AI foundation models began entering mass production and being introduced into vehicles, which raises new requirements for vehicle operating systems while also enhancing their scheduling capabilities, further facilitating the adoption of automotive AIOS.
AIOS is an AI-driven operating system that endows the operating system with "intelligence", that is, allowing the system to independently optimize and make decisions during task execution and scheduling. AIOS represents the pinnacle of vehicle intelligence, and is responsible for handling complex perceptual data, executing intelligent decisions, and realizing human-like interaction, while vehicle OS serves as the software foundation for all vehicle functions. The deep integration of the two is not merely a functional overlay but a key force driving the reshaping of the underlying architecture, deep synergy across the industry chain, and the redefinition of competitive rules.
1. Vehicle OS supports the implementation of AI capabilities: Beyond providing computing power and data, the SOA of vehicle OS abstracts vehicle functions into independent services through standardized interfaces, achieving hardware-software decoupling and making it easy for different software modules to call atomic services through interfaces, thus providing a stable and flexible invocation environment for AI models. Take Geely as an example:
Geely’s customized OS, GOS, is based on an SOA development framework that encapsulates various vehicle functions as services and allows AI functions to quickly call these services for agile development and iteration, providing the foundation for the rapid deployment and continuous optimization of AI capabilities. In early 2025, Geely introduced its "Full-Domain AI" system, and upgraded its OS to AIOS, with a model layer set up for AIOS to call.
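The SOA pattern described above can be sketched in a few lines of Python. This is a hedged illustration, not Geely's actual GOS API: the registry, service names, and AI feature below are all hypothetical, showing only how atomic services behind stable interfaces let an AI function compose vehicle behaviors without touching hardware details.

```python
class ServiceRegistry:
    """Maps service names to callables (hardware-software decoupling)."""
    def __init__(self):
        self._services = {}

    def register(self, name, fn):
        self._services[name] = fn

    def call(self, name, **kwargs):
        if name not in self._services:
            raise KeyError(f"unknown service: {name}")
        return self._services[name](**kwargs)

# Atomic services: each wraps one vehicle function behind a stable interface.
# These names are invented for illustration.
registry = ServiceRegistry()
registry.register("window.set", lambda side, percent: f"{side} window -> {percent}%")
registry.register("hvac.set_temp", lambda celsius: f"cabin temp -> {celsius}C")

# An AI feature composes atomic services instead of addressing ECUs directly.
def ai_cool_down():
    return [
        registry.call("hvac.set_temp", celsius=21),
        registry.call("window.set", side="driver", percent=0),
    ]

print(ai_cool_down())
```

Because the AI feature only depends on service names, the underlying implementation can change (different ECU, different supplier) without touching the AI code, which is the decoupling the report attributes to SOA.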
2. AI reconstructs vehicle OS: shifting it from the traditional "function-driven" model to a smarter "intent-driven" model:
AI Agents at the application layer can leverage foundation models' semantic analysis capabilities to accurately understand users' natural language commands and even latent intentions, and automatically invoke underlying software modules to complete tasks. This "intent-driven" interaction model enables vehicles to proactively understand needs and provide services, making the user experience much more natural and convenient.
Foundation models at the middleware (or model) layer not only provide calling interfaces for agents but also optimize the scheduling capabilities of vehicle OS through planning. This process relies on historical data and real-time system states, and uses reinforcement learning and operations research algorithms to dynamically allocate system resources and prioritize tasks. For instance, when a user simultaneously initiates navigation planning and high-definition video playback, foundation models can predict the urgency of route calculation and the resource demands of video decoding, coordinate CPU, GPU, and NPU compute in advance to ensure both navigation response and smooth video playback, avoiding stuttering caused by resource contention in traditional scheduling algorithms.
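The navigation-versus-video example above can be made concrete with a toy scheduler. This is a minimal sketch, not the algorithm any OEM actually ships: tasks declare a predicted urgency and per-unit compute demand (all numbers invented), and the scheduler reserves CPU/GPU/NPU capacity in urgency order so concurrent tasks do not contend for the same unit.

```python
# Arbitrary capacity units per compute unit; real systems would use
# measured utilization, not static numbers.
CAPACITY = {"CPU": 100, "GPU": 100, "NPU": 100}

def schedule(tasks, capacity=None):
    """Greedy allocation by descending urgency; returns per-task grants."""
    free = dict(capacity or CAPACITY)
    grants = {}
    for task in sorted(tasks, key=lambda t: -t["urgency"]):
        demand = task["demand"]  # e.g. {"CPU": 30, "NPU": 40}
        if all(free[unit] >= need for unit, need in demand.items()):
            for unit, need in demand.items():
                free[unit] -= need
            grants[task["name"]] = demand
        else:
            grants[task["name"]] = None  # deferred: insufficient compute
    return grants

# Navigation route calculation outranks video decoding, so it is placed first,
# but both fit because they mostly use different compute units.
tasks = [
    {"name": "route_calc", "urgency": 9, "demand": {"CPU": 30, "NPU": 40}},
    {"name": "video_decode", "urgency": 6, "demand": {"GPU": 60}},
]
print(schedule(tasks))
```

The report's point is that a model can predict urgency and demand ahead of time; here those predictions are just hand-written inputs, with only the allocation step shown.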
Data at the resource layer serves as the bridge between the two. Vehicle OS is responsible for data collection and management, while AIOS handles data analysis and decision-making.
In ArcherMind’s case, its subsidiary Arraymo developed ArraymoAIOS 1.0, an on-device AI operating system which, together with the vehicle operating system FusionOS 2.0, constitutes the technical base of AIOS. Key features of this base include:
Supports using the Qualcomm SA8775P to build cockpit agents and NVIDIA Orin to build vehicle agents, each equipped with more than 10 deeply optimized on-device models (DeepSeek, Llama, Baichuan, Gemma, Yi-Chat, etc.).
Introduces intelligent scheduling algorithms to monitor and analyze multimodal task loads (text, image, audio, etc.) in real time, and dynamically adjusts the strategies for allocating resources like CPU, GPU, and memory.
Introduces the AI acceleration engine AMLightning to efficiently schedule computing units in AI chips, allowing reasoning tasks to run on the most suitable computing unit.
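The idea of routing each inference task to the most suitable computing unit can be sketched as a simple dispatch table. This is purely illustrative: AMLightning's actual routing policy is not public, and the task types and unit preferences below are assumptions.

```python
# Invented routing table: which compute unit each workload type favors.
ROUTING = {
    "llm_decode": "NPU",      # token generation suits the NPU
    "image_encode": "GPU",    # vision preprocessing suits the GPU
    "audio_vad": "CPU",       # lightweight voice activity detection
}

def dispatch(task_type, fallback="CPU"):
    """Pick the preferred compute unit for a task type, defaulting to CPU."""
    return ROUTING.get(task_type, fallback)

batch = ["llm_decode", "image_encode", "audio_vad", "unknown_op"]
print([(t, dispatch(t)) for t in batch])
```

A real engine would also weigh current load and memory pressure, as the scheduling bullet above describes; a static table only captures the routing dimension.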
Evolution of AIOS: From AI Application and AI-Driven to AI-Native
In the automotive sector, AI was initially integrated at the application layer of the operating system, invoked via interfaces for specific scenarios. Entering the era of AIOS, AI starts penetrating deeper into the underlying layer, from being integrated into the middleware layer for driving functions, to touching the OS kernel and underlying architecture. In the future, it will evolve into AI-native OS.
As of April 2025, there have been three modes of AI integration in OS, corresponding to the three development phases of AIOS:
AI Application Phase: introduced as applications to serve scenarios.
AI-Driven Phase: connected at the middleware layer, utilizing components like AI Runtime and AI frameworks (models/agents/algorithm frameworks) to drive various software functions more flexibly.
AI-Native Phase: large language models (LLMs) are called as microkernel modular components, providing platform-level AI capabilities for the entire OS.
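The AI-Native idea of the model as a modular kernel component can be sketched as follows. All names here (Kernel, ModelService, the "syscall" method) are hypothetical: the point is only that one platform-level model module serves every subsystem, instead of each application bundling its own model.

```python
class ModelService:
    """Platform-level LLM module; a stub stands in for real inference."""
    def infer(self, prompt):
        return f"[model] {prompt}"

class Kernel:
    """Toy microkernel: modules are loaded once and shared system-wide."""
    def __init__(self):
        self._modules = {}

    def load_module(self, name, module):
        self._modules[name] = module

    def syscall(self, name):
        return self._modules[name]

kernel = Kernel()
kernel.load_module("llm", ModelService())

# Any subsystem (cockpit UI, scheduler, diagnostics) calls the same module.
cockpit_reply = kernel.syscall("llm").infer("dim the lights")
print(cockpit_reply)
```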
Huawei believes that the application of AI technology in terminal products typically passes through three phases: AI integration at the application layer, AI fusion at the system layer, and AI-centric new OS.
As of H1 2025, most OEMs have already deployed AI at the application layer and have begun to integrate AI components into the middleware layer. Examples include Li Auto’s Halo OS, NIO’s Sky OS, Xiaomi’s Hyper OS, and Geely’s AIOS GOS.
AI Application Phase
At this phase, AI is integrated into the application layer of the OS to be called for specific scenarios. The OS primarily provides computing power and data interfaces to optimize and upgrade basic AI functions like navigation and voice interaction. For example, in a "vehicle assistant" scenario, when a user asks the AI for car-related knowledge, AI at the application layer first analyzes the request, converts it into a command, retrieves relevant data from databases, and formulates a natural-language answer displayed on the center console screen.
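The request-to-answer flow just described can be sketched as a small pipeline. This is a toy stand-in, not any vendor's assistant: the parsing function replaces a real NLU model, and the dictionary replaces a real vehicle knowledge database.

```python
# Toy knowledge base standing in for the databases the assistant queries.
KNOWLEDGE = {
    "tire_pressure": "Recommended cold tire pressure is 2.4 bar.",
    "oil_change": "Oil change is recommended every 10,000 km.",
}

def parse_request(text):
    """Map natural language to a lookup key (stand-in for a real NLU model)."""
    if "tire" in text.lower():
        return "tire_pressure"
    if "oil" in text.lower():
        return "oil_change"
    return None

def vehicle_assistant(text):
    """Analyze the request, retrieve data, and format a natural-language answer."""
    key = parse_request(text)
    if key is None:
        return "Sorry, I don't have information on that yet."
    return KNOWLEDGE[key]

print(vehicle_assistant("What tire pressure should I use?"))
```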
AI-Driven Phase
At this phase, AIOS extends into the middleware layer, becoming a mainstream approach for AI Agent invocation in intelligent cockpits. Upper-layer agents leverage AI components to directly call SOA atomic services via framework modules to control vehicle functions or other software features. Additionally, toolchains can be used to call multiple external tools and ecosystem interfaces to achieve "touchless" scenario automation.
For instance, the "people search by photographing" function of Li Auto’s MindVLA requires MindVLA to complete steps such as object recognition, map data matching, and route planning in sequence, using components like the AI reasoning framework and reasoning acceleration, and invoking external map and location data.
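The multi-step pattern above (recognize, match against map data, plan a route) can be sketched as an agent executing a fixed plan of tool calls. Everything here is invented for illustration; MindVLA's actual components and data flow are not public.

```python
# Hypothetical tools; each takes a context dict and returns it enriched.
# (dict | dict merging requires Python 3.9+.)
TOOLS = {
    "recognize_scene": lambda ctx: ctx | {"place": "parking lot B2"},
    "match_map": lambda ctx: ctx | {"coords": (31.23, 121.47)},
    "plan_route": lambda ctx: ctx | {"route": f"walk to {ctx['place']}"},
}

# One intent expands into an ordered plan of tool invocations.
PLANS = {
    "find_person_by_photo": ["recognize_scene", "match_map", "plan_route"],
}

def run_agent(intent, context):
    """Execute each tool in the plan, threading context through the steps."""
    for step in PLANS[intent]:
        context = TOOLS[step](context)
    return context

result = run_agent("find_person_by_photo", {"photo": "img_001.jpg"})
print(result["route"])
```

The "touchless" quality comes from the agent owning the whole chain: the user supplies one intent and the plan handles every intermediate tool call.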
Li Auto’s Halo OS incorporates an AI subsystem in the middleware layer, which includes not only AI Runtime but also components like AI reasoning engine and reasoning acceleration framework.
AI-Native Phase
AI-Native refers to systems or product forms that are fundamentally driven by AI, and deeply integrate AI in design from the ground up.
An AI-Native OS is an operating system that deeply integrates AI into its underlying architecture from the beginning of design, features system-level AI capabilities, and delivers all-scenario intelligent experience and rich agent ecosystems.
When AI and OS achieve deep integration, an AI-Native OS is formed. Rather than merely treating AI as an upper-layer application or functional module, the system can intelligently optimize resource allocation and task scheduling according to application scenarios and demands, bringing a qualitative leap in overall efficiency and intelligence.
In Huawei’s case, its AI-Native OS has the following features:
Unified AI system base
AI-Native applications
Xiaoyi Super Agent
Open ecosystems
Underpinned by the AI system base, super apps/agents are built and rich ecosystems are created. AI-native HarmonyOS features multimodal understanding, personalized user-data understanding, privacy protection, and all-scenario perception and collaboration capabilities.
In April 2025, Huawei launched HarmonySpace 5, a HarmonyOS-based cockpit which adopts the MoLA hybrid foundation model architecture. It leverages a multi-model base (including DeepSeek), led by the PanGu Models, to enable system agent and vertical agent scenario applications. All upper-layer applications are supported by the system-level AI capabilities of HarmonyOS 5.0.
In ThunderSoft’s case, AquaDrive OS was upgraded to an AI-native OS in 2025, with optimizations in the following directions:
The AI middleware of AquaDrive OS includes agent perception/execution services and an agent management framework to support multi-agent interaction. It also incorporates a foundation model inference and scheduling framework, supporting connection to various cloud and on-device foundation models to achieve life-oriented multimodal recognition and environmental guidance.
Its framework provides SOA services and supports atomized, modular software function calls.