Market Research Report
Product Code: 1583748

Autonomous Driving Data Closed Loop Research Report, 2024

Publication Date: September 3, 2024
Publisher: ResearchInChina
Pages: 323 (English)
Delivery: same day to next business day

Overview


This report analyzes China's automotive industry and provides information on the development of the autonomous driving data closed loop.

Product Code: FZQ015

Data closed loop research: as intelligent driving evolves from data-driven to cognition-driven, what changes are needed for the data closed loop?

As Software 2.0 and end-to-end technology are introduced into autonomous driving, the intelligent driving development model has evolved from rule-based sub-task modules to the data-driven AI 2.0 stage, and is gradually developing towards artificial general intelligence (AGI), namely AI 3.0.

At Auto China 2024, SenseAuto previewed its next-generation autonomous driving technology, DriveAGI, which builds on large multimodal models to improve and upgrade end-to-end intelligent driving solutions. DriveAGI evolves autonomous driving foundation models from data-driven to cognition-driven: it goes beyond the concept of a driver, deepens understanding of the world, and boasts greater reasoning, decision-making and interaction capabilities. It is currently the technical solution in autonomous driving that is closest to human thinking patterns, best understands human intentions, and is most capable of coping with difficult driving scenarios.

Data closed loop is indispensable to autonomous driving R&D after AI 1.0, but at different stages of AI application in autonomous driving, the requirements for each link of the data closed loop vary greatly.

What changes will the full-stack model development of intelligent driving systems bring to the data closed loop?

1. The data collection mode has shifted from large-scale collection by collection vehicles to long-tail scenario collection by production vehicles, with more emphasis on high-quality data.

From the perspective of data flow, there are currently many ways to collect intelligent driving data, including collection by special collection vehicles, data collection and backhaul by production vehicles, roadside data collection and fusion, traffic data collection by drones at low altitudes, and simulated synthetic data, in a bid to achieve the maximum coverage, the most generalized scenarios, and the most complete data types, and ultimately fulfill the three elements of data: quantity, completeness, and accuracy. Among these, data collection by production vehicles is the mainstream mode.

As can be seen from the above table, OEMs keep accumulating massive amounts of intelligent driving data with production vehicles and extracting effective, high-quality data to train AI algorithms. For example, Li Auto has scored the driving behaviors of more than 800,000 car owners, about 3% of whom score above 90 and can be called "experienced drivers." The driving data of these experienced fleet drivers is the fuel for training end-to-end models. It is estimated that by the end of 2024, Li Auto's end-to-end model will have learned from over 5 million kilometers of driving data.
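The extract-by-score step can be sketched as a simple filter over fleet statistics. The scoring weights, field names, and the 90-point threshold below are illustrative stand-ins, not Li Auto's actual formula:

```python
import random

def score_driver(trip_stats):
    """Toy scoring: penalize harsh events and disengagements.
    The weights here are illustrative, not any OEM's real formula."""
    score = 100.0
    score -= 5.0 * trip_stats["harsh_brakes_per_100km"]
    score -= 3.0 * trip_stats["sharp_turns_per_100km"]
    score -= 10.0 * trip_stats["disengagements_per_100km"]
    return max(score, 0.0)

def select_training_fleet(drivers, threshold=90.0):
    """Keep only drivers whose score clears the threshold; their trips
    become candidate training data for the end-to-end model."""
    return [d for d in drivers if score_driver(d["stats"]) >= threshold]

random.seed(0)
drivers = [
    {"id": i,
     "stats": {"harsh_brakes_per_100km": random.uniform(0, 3),
               "sharp_turns_per_100km": random.uniform(0, 3),
               "disengagements_per_100km": random.uniform(0, 0.5)}}
    for i in range(1000)
]
veterans = select_training_fleet(drivers)
ratio = len(veterans) / len(drivers)  # fraction of "experienced drivers"
```

With real fleets, the scoring inputs would come from backhauled telemetry rather than random draws, but the gating logic is the same.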

So, with sufficient data, how can we fully extract effective scene data and mine higher-quality training data? The following examples illustrate this:

In terms of data compression, the data collected by vehicles often comes from the environmental perception data of vehicle systems and various sensors. Before being used for analysis or model training, the data must be strictly preprocessed and cleaned to ensure its quality and consistency. Vehicle data may come from different sensors and devices, each with its own specific data format. High-definition intelligent driving scene data stored in RAW format (i.e., raw camera data that has not been processed by the ISP algorithm) is expected to become a trend in high-quality scene data. In Vcarsystem's case, its "camera-based RAW data compression and collection solution" not only improves the efficiency of data collection, but also maximizes the integrity of the raw data, providing a reliable foundation for subsequent data processing and analysis. Compared with traditional post-ISP compressed data replay, RAW compressed data replay avoids the information loss of the ISP processing pipeline and can restore the raw image data more accurately, improving the accuracy of algorithm training and the performance of the intelligent driving system.
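A toy comparison can show why RAW collection preserves more information than a post-ISP path: lossless compression of the raw samples round-trips exactly, while an ISP-style bit-depth reduction cannot be undone. The 12-bit/8-bit figures below are illustrative assumptions, not Vcarsystem's actual pipeline:

```python
import random
import zlib

random.seed(42)
# Simulated 12-bit RAW sensor samples (values 0..4095), a toy stand-in
# for unprocessed camera data.
raw = [random.randint(0, 4095) for _ in range(10_000)]
raw_bytes = b"".join(v.to_bytes(2, "big") for v in raw)

# Lossless compression of RAW: the original samples are fully recoverable.
packed = zlib.compress(raw_bytes)
restored = zlib.decompress(packed)

# Toy "post-ISP" path: quantize 12-bit samples to 8 bits (as a
# tone-mapped 8-bit ISP output would), then try to restore them.
isp_8bit = [v >> 4 for v in raw]
restored_12bit = [v << 4 for v in isp_8bit]
lost = sum(1 for a, b in zip(raw, restored_12bit) if a != b)
# `restored == raw_bytes` holds exactly, while `lost` samples differ
# after the lossy 8-bit detour -- that detail is gone for good.
```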

As for data mining, data mining cases based on offline 3D point cloud foundation models deserve attention. For example, based on offline point cloud foundation models, QCraft can mine high-quality 3D data and continuously improve object recognition capabilities. Beyond that, QCraft has also built an innovative text-to-image multimodal model. With only a natural language text description, the model can automatically retrieve corresponding scene images without supervision and mine many long-tail scenes that are difficult to find in ordinary data use and rarely encountered in real life, thereby improving the efficiency of long-tail scene mining. For example, when text descriptions such as "a large truck traveling in the rain at night" or "a person lying at the roadside" are input, the system automatically returns the corresponding scenes, supporting targeted analysis and training.
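The retrieval idea can be sketched with a toy shared embedding space in which text queries and scene clips are compared by cosine similarity. A real multimodal model learns such a space from data; the hand-built attribute axes and clip names below are purely illustrative:

```python
import math

# Toy shared embedding space: each dimension is a scene attribute.
AXES = ["truck", "rain", "night", "pedestrian", "fallen"]

def embed_tags(tags):
    """Map a set of attribute tags to a vector in the toy space."""
    return [1.0 if a in tags else 0.0 for a in AXES]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

# Hypothetical fleet clips, each described by scene attributes.
scenes = {
    "clip_001": {"truck", "rain", "night"},
    "clip_002": {"pedestrian", "fallen"},
    "clip_003": {"truck"},
}

def retrieve(query_tags, top_k=1):
    """Rank clips by similarity to the query, as a text-to-scene
    retriever would, and return the best matches."""
    q = embed_tags(query_tags)
    ranked = sorted(scenes,
                    key=lambda s: cosine(q, embed_tags(scenes[s])),
                    reverse=True)
    return ranked[:top_k]

best = retrieve({"truck", "rain", "night"})[0]
```

In practice the query side would embed free text and the clip side would embed images, but the nearest-neighbor ranking step is the same.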

2. Data labeling is heading toward AI-automated high-precision labeling; manual labeling will be used less and less, or eventually no longer be needed.

As foundation models find broad application and deep learning technology advances, the demand for data labeling has grown explosively. The performance of foundation models depends heavily on the quality of input data, so the requirements for the accuracy, consistency, and reliability of data labeling are increasingly high. To meet this demand, many data labeling companies have begun to develop automatic labeling functions to further improve labeling efficiency. Examples include:

Based on the automation capabilities of foundation models, DataBaker Technology has launched 4D-BEV, a new labeling tool that supports processing point clouds at the scale of hundreds of millions of points. It helps quickly and accurately perceive and understand the vehicle's surroundings, and combines static and dynamic perception tasks for multi-perspective, multi-sequence labeling of objects such as vehicles, pedestrians and road signs, providing more accurate information on object location, speed, posture and behavior. It can also provide interaction information between different objects in the scene, helping the autonomous driving system better understand road traffic conditions so as to make more accurate decisions and controls. To improve labeling efficiency and accuracy, DataBaker Technology adds machine vision algorithms to 4D-BEV to automatically complete complex labeling work, enabling high-quality recognition of lane lines, curbs, stop lines, etc.

MindFlow's SEED data labeling platform supports all types of 2D, 3D, and 4D labeling in autonomous driving and other scenarios, including 2D/3D fusion, 3D point cloud segmentation, point cloud sequential frame overlay, BEV, 4D point cloud lane lines and 4D point cloud segmentation, covering all labeling sub-scenarios of autonomous driving. Its AI algorithm labeling model incorporates AI intelligent segmentation based on the SAM segmentation model, static road adaptive segmentation, dynamic obstacle AI preprocessing, and AI interactive labeling. It improves the average efficiency of data labeling in typical autonomous driving scenarios by more than 4-5 times, and by more than 10-20 times in some scenarios. In addition, MindFlow's data labeling foundation model is based on weakly supervised and semi-supervised learning, and uses a small amount of manually labeled data together with a large amount of unlabeled data for efficient detection, segmentation, and recognition of scene objects.
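The common pattern behind these platforms — AI pre-labeling with humans handling only uncertain cases — can be sketched as a confidence gate. The model stub, threshold, and speedup measure below are illustrative assumptions, not any vendor's actual numbers:

```python
import random

random.seed(7)

def model_prelabel(frame_id):
    """Stand-in for an AI pre-annotation model: returns a label and a
    confidence score. In a real platform this would be a trained
    detector or segmenter, not a random draw."""
    conf = random.random()
    return ("vehicle", conf)

frames = list(range(10_000))
auto_labeled, needs_human = [], []
for f in frames:
    label, conf = model_prelabel(f)
    # Confidence gating: only uncertain frames go to human annotators;
    # confident pre-labels are accepted automatically.
    (auto_labeled if conf >= 0.3 else needs_human).append((f, label))

# Rough labeling speedup: total frames vs. frames a human must touch.
speedup = len(frames) / max(len(needs_human), 1)
```

Raising the confidence threshold trades throughput for label quality, which is why platforms pair the gate with spot-check review of the auto-accepted frames.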

Additionally, on July 27, 2024, NIO officially announced NWM (NIO World Model), China's first intelligent driving world model. As a multivariate autoregressive generative model, it can fully understand information, generate new scenes, and predict what may happen in the future. Notably, as a generative model, NWM can use a 3-second driving video as a prompt to generate a 120-second video. Through self-supervision, NWM requires no data labeling and becomes more efficient.
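The prompt-then-rollout behavior of an autoregressive generative model can be illustrated with a toy: a few observed frames seed a loop in which each new frame is predicted from the previous ones. Constant-velocity extrapolation stands in for the learned model here; this is a sketch of the mechanism, not NWM itself:

```python
def rollout(prompt, horizon):
    """Toy autoregressive generator: each new frame is predicted from
    the last two (constant-velocity extrapolation stands in for a
    learned world model)."""
    frames = list(prompt)
    for _ in range(horizon):
        frames.append(2 * frames[-1] - frames[-2])  # next = last + velocity
    return frames

# A short "prompt" of 3 observed frames generates a much longer
# sequence, mirroring NWM's 3-second-prompt -> 120-second-rollout idea.
prompt = [0.0, 1.0, 2.0]
video = rollout(prompt, horizon=117)
```

Because the rollout target is simply the next observed frame, such models can train self-supervised on raw driving video, which is why no manual labeling is needed.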

3. Simulation testing is becoming increasingly important in the development of intelligent driving. High accuracy and high restoration capabilities are the key to improving the quality of scene coverage.

High-level intelligent driving needs to be tested in various complex and diverse scenarios, which requires not only high precision sensor perception and restoration capabilities, but also powerful 3D scene reconstruction capabilities and scene coverage generalization capabilities.

PilotD Automotive's full physical-level sensor model can simulate detailed physical phenomena such as multipath reflection, refraction and interference of electromagnetic waves, dynamic sensor characteristics such as detection loss rate, object resolution and measurement inaccuracy, and "ghost" artifacts, so as to achieve the high fidelity required of the sensor model. The full physical-level sensor model based on PilotD Automotive's PlenRay physical ray technology currently boasts a simulation restoration rate of over 95%.

dSPACE's AURELION (high-precision simulation of 3D scenes and physical sensors) is a flexible sensor simulation and visualization software solution. Based on physical rendering by a game engine, it simulates pixel-level raw data of camera sensors. AURELION's radar module uses ray tracing technology to simulate the signal-level raw data of ray-type sensors. Considering the impacts of specific materials on LiDAR, the output point cloud contains reflectivity values close to real calculations. For each ray, it provides realistic motion distortion effects and configurable time offset values.

RisenLighten's Qianxing Simulation Platform adds rich and realistic pedestrian models, and supports customization of micro trajectories of pedestrians and batch generation of pedestrians. Moreover, the platform also provides different high-fidelity pedestrian behavior style models, covering such scenarios as human-vehicle interaction, crossing, and diagonal crossing at intersections. It models three types of drivers (conservative, conventional and aggressive), and refines parameters by probability distribution, so as to diversify and randomize driving behaviors of vehicles in the environment.
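The style-based parameterization described above can be sketched as sampling per-vehicle parameters from per-style probability distributions. The style names mirror the text; the means and spreads below are invented for illustration, not RisenLighten's actual values:

```python
import random

random.seed(1)

# Illustrative parameter distributions (mean, std) for three driver
# styles; the numbers are invented for this sketch.
STYLES = {
    "conservative": {"desired_speed": (45.0, 5.0), "gap_s": (2.5, 0.3)},
    "conventional": {"desired_speed": (55.0, 6.0), "gap_s": (1.8, 0.3)},
    "aggressive":   {"desired_speed": (70.0, 8.0), "gap_s": (1.0, 0.2)},
}

def spawn_driver(style):
    """Draw per-vehicle parameters from the style's distributions so
    that background traffic is diverse and randomized rather than
    uniform."""
    params = {"style": style}
    for name, (mu, sigma) in STYLES[style].items():
        params[name] = random.gauss(mu, sigma)
    return params

# Populate a scene with 100 background vehicles of mixed styles.
fleet = [spawn_driver(random.choice(list(STYLES))) for _ in range(100)]
```

Refining parameters by probability distribution, as the text puts it, is exactly this draw-per-vehicle step: two "aggressive" drivers still differ slightly, which diversifies the simulated traffic.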

As a generative simulation model, NIO's NSim can compare each trajectory deduced by NWM with the corresponding simulation results. Previously, trajectories could only be compared with the single trajectory observed in the real world; adding NSim enables joint verification across tens of millions of simulated worlds, providing more data for NWM training. This makes the output intelligent driving trajectory and experience safer, more reasonable, and more efficient.

In the field of autonomous driving, end-to-end solutions have a more urgent need for high-fidelity scenes. Because the end-to-end system must cope with various complex scenarios, many videos labeled with autonomous driving behaviors need to be fed into autonomous driving training. With regard to 3D scene reconstruction, the penetration and application of 3D Gaussian Splatting (3DGS) technology in the automotive industry is currently accelerating, because 3DGS performs well in rendering speed, image quality, positioning accuracy, etc., largely making up for the shortcomings of NeRF. Meanwhile, scenes reconstructed with 3DGS can replicate the edge scenes (corner cases) found in real intelligent driving; through dynamic scene generalization, this improves the ability of the end-to-end intelligent driving system to cope with corner cases. Examples include:

51Sim innovatively integrates 3DGS into traditional graphics rendering engines through AI algorithms, making breakthroughs in realism. 51Sim's fusion solution offers high-quality, real-time rendering. The high-fidelity simulation scene not only improves training quality for the autonomous driving system, but also significantly improves the authenticity of simulation, making it almost indistinguishable to the naked eye, greatly improving confidence in the simulation, and making up for the shortfalls of 3DGS in detail and generalization.

In addition, Li Auto also uses 3DGS for simulation scene reconstruction. Li Auto's intelligent driving solution consists of three systems: end-to-end (fast system) + VLM (slow system) + world model. The world model combines two technology paths, reconstruction and generation: it uses 3DGS to reconstruct real data, and a generative model to offer new views. In scene reconstruction, dynamic and static elements are separated: the static environment is reconstructed, while dynamic objects are reconstructed and new views of them generated. After re-rendering the scene, a 3D physical world is formed in which dynamic assets can be edited and adjusted arbitrarily for partial generalization of the scene. The generative model features greater generalization ability, and allows weather, lighting, traffic flow and other conditions to be customized to generate new scenes that conform to real-world laws, which are used to evaluate the adaptability of the autonomous driving system in various conditions.

In short, the scene constructed by combining reconstruction and generation creates a better virtual environment for learning and testing the capabilities of the autonomous driving system, enabling the system to have efficient closed-loop iteration capabilities and ensuring the safety and reliability of the system.

4. The rapid development of OEMs' full-stack self-development capabilities prompts data closed-loop technology providers to keep improving their service capabilities.

The data closed loop is divided into the perception layer and the planning and control layer, each with an independent closed-loop process. In both areas, data closed-loop technology providers keep improving their service capabilities. For example:

In terms of perception, during project development, versions of the autonomous driving system are released regularly, integrating and packaging all contents such as perception, planning and control, communication, and middleware. Some intelligent driving solution providers, such as Nullmax, first release the perception part separately, then test it with automated tools and testers, output specific reports, and evaluate problem fixes at an early stage. If there are problems with the perception version, there is still time to modify and retest it. This largely prevents upstream perception problems from affecting the entire system, makes problem location and system improvement easier, and greatly improves the efficiency of system release and project development.
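The release-gating idea — evaluate the separately released perception build against a baseline before it enters the full system — can be sketched as a metric comparison. The metric names, values, and regression tolerance are illustrative assumptions, not Nullmax's actual criteria:

```python
def evaluate_perception(candidate, baseline, tolerance=0.01):
    """Toy release gate for a separately released perception build:
    compare candidate metrics against the current baseline and flag
    regressions beyond the tolerance."""
    metrics = {}
    for name, value in candidate.items():
        delta = value - baseline[name]
        metrics[name] = {"value": value,
                         "delta": round(delta, 4),
                         "regressed": delta < -tolerance}
    release_ok = not any(m["regressed"] for m in metrics.values())
    return {"metrics": metrics, "release_ok": release_ok}

# Hypothetical metric reports for two candidate perception builds.
baseline = {"vehicle_AP": 0.85, "pedestrian_AP": 0.78, "lane_F1": 0.91}
good_build = {"vehicle_AP": 0.86, "pedestrian_AP": 0.79, "lane_F1": 0.92}
bad_build = {"vehicle_AP": 0.80, "pedestrian_AP": 0.79, "lane_F1": 0.92}
```

Catching the regression in `bad_build` at this stage keeps the upstream perception problem out of the integrated system release, which is the efficiency gain the text describes.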

In terms of planning and control, in QCraft's case, its self-developed "joint spatio-temporal planning algorithm" takes both space and time into account when planning the trajectory, solving for the driving path and speed in three dimensions simultaneously, rather than solving for the path first and then solving for the speed along that path to form the trajectory. Upgrading from "horizontal and vertical separation" to "horizontal and vertical combination" means that both the path and speed curves are used as variables in the optimization problem, so as to obtain the optimal combination of the two.
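The difference between decoupled and joint optimization can be shown on a toy problem where the shortest path is only safe at low speed: choosing the path first locks in a worse trajectory than searching path and speed together. All numbers are illustrative; this sketches the principle, not QCraft's algorithm:

```python
from itertools import product

PATHS = {"A": 100.0, "B": 120.0}   # path lengths (m); A is shorter but narrow
SPEEDS = [10.0, 20.0]              # candidate speeds (m/s)

def cost(path, speed):
    """Travel time plus a safety penalty: the narrow path A is unsafe
    at high speed. Numbers are purely illustrative."""
    penalty = 1000.0 if (path == "A" and speed > 10.0) else 0.0
    return PATHS[path] / speed + penalty

# Decoupled ("horizontal/vertical separation"): pick the path first
# (by length alone), then the best speed for that fixed path.
seq_path = min(PATHS, key=PATHS.get)
seq_cost = min(cost(seq_path, v) for v in SPEEDS)

# Joint spatio-temporal planning: treat path and speed as variables of
# one optimization and search their combinations together.
joint_path, joint_speed = min(product(PATHS, SPEEDS),
                              key=lambda pv: cost(*pv))
joint_cost = cost(joint_path, joint_speed)
```

The joint search picks the longer path B driven fast, while the decoupled planner is stuck crawling along A — the same reason coupling path and speed curves in one optimization problem can yield a better trajectory.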

Data closed-loop technology providers generally provide complete data closed-loop solutions or separate data closed-loop products (i.e., modular tool services such as annotation platforms, replay tools and simulation tools) for OEMs and Tier 1s. OEMs with strong data governance capabilities often outsource the tool modules they are not good at and integrate them into their own data processing platforms, while OEMs with weak data governance capabilities will consider tightly coupled data closed-loop products or customized services. For example, FUGA, Freetech's new-generation tightly coupled data closed-loop platform product, has gathered more than 8 million kilometers of real mass-production data and experience in closed-loop algorithm iteration across over 100 production models, achieving a more than 100-fold improvement in algorithm iteration efficiency and managing over 3,000 sets of high-value scene data fragments per month. At present, FUGA has been deployed in production vehicle projects of multiple leading OEMs, supporting daily test data problem analysis as well as weekly data cleaning and statistical report analysis.

Table of Contents

1 Overview of Autonomous Driving Data Closed Loop

  • 1.1 Evolution of Data Closed Loop
  • 1.2 Difficulties in Building An Autonomous Driving Data Closed Loop
  • 1.3 Solution Case 1
  • 1.4 Solution Case 2
  • 1.5 Autonomous Driving Data Closed Loop Industry Chain Map
  • 1.6 Foundation of Data Closed Loop: Data Security
    • 1.6.1 Status Quo of Automotive Data Security Standards
    • 1.6.2 Data Security Risks at All Autonomous Driving Levels
    • 1.6.3 Overview of Data Security Governance
    • 1.6.4 Data Security Governance Cases

2 Data Collection

  • 2.1 Summary of Diverse Intelligent Driving Data Collection Modes
    • 2.1.1 Case 1: Production Vehicle
    • 2.1.2 Case 2: Collection Vehicle
    • 2.1.3 Case 3: Drone
    • 2.1.4 Case 4: Roadside Data
    • 2.1.5 Case 5: Simulation Synthesis
  • 2.2 Typical Data Collection/Data Compression Solutions
    • 2.2.1 Case 1: TZTEK Technology
    • 2.2.2 Case 2: Kunyi Electronics
    • 2.2.3 Case 3: EXCEEDDATA

3 Data Annotation

  • Summary: Comparison between Intelligent Data Annotation Platforms (1)
  • Summary: Comparison between Intelligent Data Annotation Platforms (2)
  • 3.1 Haitian Ruisheng
    • 3.1.1 DOTS-AD Data Platform
    • 3.1.2 DOTS-LLM Service Platform
  • 3.2 MindFlow
    • 3.2.1 Autonomous driving AI data annotation solution
    • 3.2.2 SEED Data Service Platform
    • 3.2.3 Data Security Solution
  • 3.3 DataBaker Technology
    • 3.3.1 Autonomous Driving 2D Image Annotation Platform
    • 3.3.2 Autonomous Driving 3D Point Cloud Annotation Platform
    • 3.3.3 Autonomous Driving 4D-BEV Annotation
    • 3.3.4 AI Data Platform
  • 3.4 Molar Intelligence
    • 3.4.1 4D Annotation Tool V2.0
  • 3.5 Magic Data
    • 3.5.1 Annotator Intelligent Annotation Tool
  • 3.6 Jinglianwen Technology
    • 3.6.1 Data Annotation Service
  • 3.7 Appen
    • 3.7.1 MatrixGo(R) High-precision Data Annotation Platform
    • 3.7.2 Foundation Model Intelligent Development Platform
  • 3.8 Scale AI
    • 3.8.1 Annotation and Fine-tuning Services

4 Data Processing

  • 4.1 Autonomous Driving Data Closed-Loop Processing Process
    • 4.1.1 Case 1 of Autonomous Driving Data Closed-Loop Processing Process
    • 4.1.2 Case 2 of Autonomous Driving Data Closed-Loop Processing Process
  • 4.2 Classification and Grading of Autonomous Driving Data
    • 4.2.1 Classification of Autonomous Driving Data
    • 4.2.2 Grading of Autonomous Driving Data
    • 4.2.3 Case: Classification and Grading of Data from Some OEM
  • 4.3 Data Compliance
    • 4.3.1 Overview of Data Compliance
    • 4.3.2 List of Models That Meet Four Compliance Requirements for Automotive Data Security
    • 4.3.3 Data Compliance Solution Case 1
    • 4.3.4 Data Compliance Solution Case 2
  • 4.4 Data Transmission
    • 4.4.1 Case: EMQ
      • 4.4.1.1 EMQ Product Series
      • 4.4.1.2 EMQ Vehicle-Cloud Integrated Data Closed-Loop Platform
      • 4.4.1.3 EMQ Vehicle-Cloud Cooperative Data Closed-Loop Application Case: Some OEM & Some Tier1
      • 4.4.1.4 EMQ Vehicle-Cloud Flexible Data Collection Solution
  • 4.5 Intelligent Computing Center
    • 4.5.1 Summary of Autonomous Driving Cloud Supercomputing Centers in China
    • 4.5.2 Intelligent Computing Case 1
    • 4.5.3 Intelligent Computing Case 2
  • 4.6 Data Closed-Loop Cloud Platform
    • 4.6.1 Overview of Cloud Service-Enabled Data Closed-Loop
    • 4.6.2 Case 1: Cloud Data Closed-Loop Tool SimCycle
    • 4.6.3 Case 2: Huawei Cloud-Enabled Data Closed-Loop
    • 4.6.4 Case 3: Jingwei Hirain's Intelligent Driving Data Closed-Loop Cloud Platform OrienLink
    • 4.6.5 Case 4: 51SimOne Cloud-Native Simulation Platform

5 Data Closed-Loop Technology Suppliers

  • Summary: Comparison between Data Closed-Loop Technology Suppliers (1)
  • Summary: Comparison between Data Closed-Loop Technology Suppliers (2)
  • Summary: Comparison between Data Closed-Loop Technology Suppliers (3)
  • Summary: Comparison between Data Closed-Loop Technology Suppliers (4)
  • Summary: Comparison between Data Closed-Loop Technology Suppliers (5)
  • 5.1 JueFX Technology
    • 5.1.1 Data Closed-Loop Solution
    • 5.1.2 Data Closed-Loop Solution (Urban NOA)
    • 5.1.3 Data Closed-Loop Solution (Highway NOA)
    • 5.1.4 BEV+Transformer Algorithm Mass Production Architecture Based on Data Closed-Loop
    • 5.1.5 Multimodal Automatic Annotation and Tool Chain
    • 5.1.6 Automatic Annotation Based on 4D Detection
  • 5.2 QCraft
    • 5.2.1 Data Closed-Loop Capabilities
    • 5.2.2 Joint Spatio-Temporal Planning Technology
    • 5.2.3 Driven-by-QCraft New Mid-to-high-level Intelligent Driving Solution Based on Journey(R) 6
    • 5.2.4 Latest Dynamics
  • 5.3 Zhuoyu
    • 5.3.1 Technology Route
    • 5.3.2 4D Vision-only Automatic Annotation Technology
    • 5.3.3 Intelligent Driving Chip Compute Optimization (1) - Model Optimization
    • 5.3.4 Intelligent Driving Chip Compute Optimization (2) - Computing Acceleration (Heterogeneous Computing)
    • 5.3.5 Intelligent Driving Chip Compute Optimization (2) - Computing Acceleration (Model Reasoning Optimization)
    • 5.3.6 Intelligent Driving Chip Compute Optimization (2) - Computing Acceleration (Operator Optimization)
    • 5.3.7 Intelligent Driving Chip Compute Optimization (3) - System Optimization
  • 5.4 Haomo.ai
    • 5.4.1 Intelligent Driving Data Progress Table
    • 5.4.2 HPilot Series
    • 5.4.3 DriveGPT
  • 5.5 SenseAuto
    • 5.5.1 New Embedded Model Piccolo2
    • 5.5.2 UniAD True End-to-end Perception and Decision Integrated Foundation Model
    • 5.5.3 DriveAGI & SenseNova 5.0
    • 5.5.4 ADNN Chip Heterogeneous Computing Platform
    • 5.5.5 Deployment of Native Large Multimodal Model on Vehicles
    • 5.5.6 Latest Dynamics
  • 5.6 Momenta
    • 5.6.1 Data Closed Loop
    • 5.6.2 Mapless Intelligent Driving Algorithm and High-level Intelligent Driving Solution
    • 5.6.3 Latest Dynamics
  • 5.7 Freetech
    • 5.7.1 Data Closed-Loop Platform Product - FUGA
  • 5.8 Nullmax
    • 5.8.1 One-stop Data-in-the-loop Platform
    • 5.8.2 Multimodal End-to-end + Secure Brain-inspired Intelligence
    • 5.8.3 Fully Automated Data Process
    • 5.8.4 Growable Algorithm Platform
  • 5.9 DeepRoute.ai
    • 5.9.1 End-to-end
    • 5.9.2 End-to-end High-level Intelligent Driving Platform DeepRoute IO
    • 5.9.3 Deeproute-Driver
    • 5.9.4 D-PRO
    • 5.9.5 D-AIR
  • 5.10 Bosch
    • 5.10.1 Data Closed Loop
    • 5.10.2 High-level Intelligent Driving
  • 5.11 EXCEEDDATA
    • 5.11.1 Vehicle-Cloud Data Base
    • 5.11.2 Vehicle-Cloud Data Base - Flexible Data Collection
    • 5.11.3 Vehicle-Cloud Data Base - Flexible Data Warehouse
    • 5.11.4 Vehicle-Cloud Data Base - Application in Scenarios
    • 5.11.5 Vehicle-Cloud Integrated Tool Chain
      • 5.11.5.1 Vehicle-Cloud Integrated Tool Chain (1)
      • 5.11.5.2 Vehicle-Cloud Integrated Tool Chain (2)
      • 5.11.5.3 Vehicle-Cloud Integrated Tool Chain (3)
      • 5.11.5.4 Vehicle-Cloud Integrated Tool Chain (4)
      • 5.11.5.5 Vehicle-Cloud Integrated Tool Chain (5)
      • 5.11.5.6 Vehicle-Cloud Integrated Tool Chain (6)
      • 5.11.5.7 Vehicle-Cloud Integrated Tool Chain (7)
    • 5.11.6 Application Case of Vehicle-Cloud Integrated Tool Chain
  • 5.12 Yoocar
    • 5.12.1 Business Layout
    • 5.12.2 Connection Solution
    • 5.12.3 Autonomous Driving Data Closed-Loop Tool Chain Platform
  • 5.13 Mxnavi
    • 5.13.1 Profile
    • 5.13.2 Development History
    • 5.13.3 Crowd-sourced Map Solution
    • 5.13.4 Crowd-sourced Map System Architecture
    • 5.13.5 Crowd-sourced Map System: Mapping Process
    • 5.13.6 Crowd-sourced Map System: Map Elements
    • 5.13.7 Crowd-sourced Map System: Intelligent Driving Function Scenarios
    • 5.13.8 Crowd-sourced Automated Production System
    • 5.13.9 Crowd-sourced Map System: Map Engine Architecture
    • 5.13.10 Crowd-sourced Map System: Multi-source Fusion Location Solution Based on Visual Perception
    • 5.13.11 Crowd-sourced Map System: Data Compliance Architecture
    • 5.13.12 Partners
  • 5.14 NavInfo
    • 5.14.1 Data Compliance Closed Loop
    • 5.14.2 One Map Data Platform
    • 5.14.3 Lightweight Map Product - HD Lite
    • 5.14.4 Lightweight Version of NOP System - NOP Lite
    • 5.14.5 NI in Car Intelligent Integrated Solution
    • 5.14.6 AutoChips' Chip Series
    • 5.14.7 Pachira's DeepThinking Foundation Model
    • 5.14.8 Sixents Technology's Orion
    • 5.14.9 "Vehicle-Road-Cloud Integration" Solution
    • 5.14.10 Latest Dynamics

6 Data Closed Loop of Typical OEMs

  • Summary: Data Closed Loop Capabilities of OEMs (1)
  • Summary: Data Closed Loop Capabilities of OEMs (2)
  • 6.1 BYD
    • 6.1.1 "Vehicle Intelligence" Strategy
    • 6.1.2 Data Accumulation Capabilities
    • 6.1.3 Data Closed Loop - Algorithm Capabilities
    • 6.1.4 Data Closed Loop - Computing Capabilities
    • 6.1.5 "Eyes of God" High-level Intelligent Driving System
  • 6.2 Chery
    • 6.2.1 ZDrive.ai - Profile
    • 6.2.2 ZDrive.ai - Data Closed-Loop Capabilities
    • 6.2.3 ZDrive.ai - Zhuojie Joint Innovation Center
    • 6.2.4 ZDrive.ai - Latest Dynamics
  • 6.3 Great Wall Motor
    • 6.3.1 Intelligent Driving System
    • 6.3.2 SEE End-to-End Intelligent Driving Foundation Model
    • 6.3.3 Supercomputing Center
  • 6.4 Geely
    • 6.4.1 Zeekr Haohan Intelligent Driving 2.0 All-Scenario End-to-End
    • 6.4.2 SuperVision Solution of Zeekr NZP
    • 6.4.3 Xingrui Intelligent Computing Center
    • 6.4.4 Intelligent Driving Cloud Data Factory
    • 6.4.5 Intelligent Driving Closed-Loop System
    • 6.4.6 ROBO Galaxy Tool Chain Process Solution
    • 6.4.7 Data Production Modes
    • 6.4.8 Self-developed Algorithm Underlying Software Abstraction
    • 6.4.9 Intelligent Driving Self-development SOA Design
    • 6.4.10 Fully Self-developed Cockpit Operating System
    • 6.4.11 Global Platform Operation System
  • 6.5 Li Auto
    • 6.5.1 Large Multimodal Cognitive Model
    • 6.5.2 Intelligent Driving End-to-end Solution
    • 6.5.3 Algorithm Architecture of Intelligent Driving 3.0
    • 6.5.4 Mapless NOA
    • 6.5.5 Intelligent Laboratory
    • 6.5.6 Progress in Self-developed Chips
  • 6.6 Xpeng
    • 6.6.1 Adjustment of Organizational Structure of Autonomous Driving Department
    • 6.6.2 End-to-end System
    • 6.6.3 Evolution of XNGP
    • 6.6.4 XNGP's Closed-Loop Data Iteration System
    • 6.6.5 Self-developed Chips
    • 6.6.6 Fuyao Intelligent Computing Center
  • 6.7 NIO
    • 6.7.1 Intelligent Driving World Model
    • 6.7.2 New Intelligent Driving Architecture
    • 6.7.3 Swarm Intelligence
    • 6.7.4 Self-developed Chips

7 Data Closed Loop Development Trends

  • 7.1 Trend 1
  • 7.2 Trend 2
  • 7.3 Trend 3
  • 7.4 Trend 4
  • 7.5 Trend 5
  • 7.6 Trend 6
  • 7.7 Trend 7
  • 7.8 Trend 8
  • 7.9 Trend 9