Market Research Report
Product Code: 1482384

End-to-end Autonomous Driving (E2E AD) Research Report, 2024

Published: April 10, 2024 | Publisher: ResearchInChina | Pages: 200 (English) | Delivery: same day to next business day
Summary

This report investigates and analyzes China's end-to-end autonomous driving (E2E AD) industry, covering the status quo of autonomous driving, development trends, and application cases.

Product Code: GX012

End-to-end Autonomous Driving Research: Status Quo of End-to-end (E2E) Autonomous Driving

1. Status quo of end-to-end solutions in China

An end-to-end autonomous driving system refers to the direct mapping from sensor data inputs (camera images, LiDAR, etc.) to control command outputs (steering, acceleration/deceleration, etc.). The approach first appeared in the ALVINN project in 1988, which used a camera and a laser rangefinder as inputs and a simple neural network to generate steering as output.
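
To make the idea of direct mapping concrete, here is a minimal sketch of such a policy network (a hypothetical toy model in the spirit of ALVINN, not its actual architecture):

```python
import torch
import torch.nn as nn

class TinyE2EPolicy(nn.Module):
    """Toy end-to-end policy: camera image in, steering command out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # perception is implicit, learned
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)             # single output: steering angle

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.head(self.encoder(image)))  # normalized to [-1, 1]

# One forward pass maps raw pixels directly to a control command; there is
# no hand-written perception, prediction or planning stage in between.
steering = TinyE2EPolicy()(torch.rand(1, 3, 96, 96))
```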

In early 2024, Tesla rolled out FSD V12.3, showcasing an impressive level of intelligent driving. Its end-to-end autonomous driving solution has garnered widespread attention from OEMs and autonomous driving solution companies in China.

Compared with conventional multi-module solutions, an end-to-end autonomous driving solution integrates perception, prediction and planning into a single model, simplifying the solution structure. It can imitate a human driver making driving decisions directly from visual input, cope effectively with the long-tail scenarios that trouble modular solutions, and improve model training efficiency and performance.
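
The structural point can be sketched as one shared backbone feeding jointly trained heads, so that the planning loss shapes perception features directly (a schematic sketch; layer sizes and head outputs are illustrative, not any vendor's design):

```python
import torch
import torch.nn as nn

class UnifiedADModel(nn.Module):
    """Schematic single-model stack: shared backbone plus jointly trained
    perception, prediction and planning heads (hypothetical shapes/names)."""
    def __init__(self, feat: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.perception = nn.Linear(feat, 10)  # e.g. object-class logits
        self.prediction = nn.Linear(feat, 8)   # e.g. agent future offsets
        self.planning = nn.Linear(feat, 2)     # e.g. steering + acceleration

    def forward(self, img: torch.Tensor):
        z = self.backbone(img)                 # one feature space for all heads
        return self.perception(z), self.prediction(z), self.planning(z)

# Because the heads share one differentiable backbone, gradients from a
# planning loss reshape perception features with no hand-defined interfaces.
det, pred, plan = UnifiedADModel()(torch.rand(1, 3, 64, 64))
```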

Li Auto's end-to-end solution

Li Auto believes that a complete end-to-end model should cover the whole process of perception, tracking, prediction, decision-making and planning, and that it is the optimal route to L3 autonomous driving. In 2023, Li Auto rolled out AD Max 3.0, whose overall framework reflects the end-to-end concept but still falls short of a complete end-to-end solution. In 2024, Li Auto is expected to upgrade the system into a complete end-to-end solution.

Li Auto's autonomous driving framework consists of two systems:

Fast system: System 1, Li Auto's existing end-to-end solution, which executes directly after perceiving the surroundings.

Slow system: System 2, a multimodal large language model that reasons logically and explores unknown environments to solve problems in unknown L4 scenarios.
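
A dual-system controller of this kind can be sketched as a simple dispatcher (a hypothetical sketch; `fast_policy`, `slow_vlm_plan` and the confidence threshold are illustrative, not Li Auto's published design):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    steering: float
    accel: float
    source: str  # "system1" (fast path) or "system2" (slow path)

CONFIDENCE_FLOOR = 0.7  # illustrative threshold, not a published value

def fast_policy(frame) -> tuple[Decision, float]:
    """System 1: the end-to-end model; returns a decision and its confidence."""
    return Decision(0.02, 0.1, "system1"), 0.93  # stub for the trained network

def slow_vlm_plan(frame) -> Decision:
    """System 2: a multimodal LLM that reasons about an unfamiliar scene."""
    return Decision(0.0, -0.2, "system2")  # stub for the large model

def drive_step(frame) -> Decision:
    decision, confidence = fast_policy(frame)  # runs every control cycle
    if confidence < CONFIDENCE_FLOOR:          # unknown L4-style scenario
        return slow_vlm_plan(frame)            # defer to deliberate reasoning
    return decision                            # common case: fast path acts

print(drive_step(frame=None))  # stub frame; prints the System 1 decision
```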

In the process of advancing the end-to-end solution, Li Auto plans to unify the planning/prediction model with the perception model and, on the existing foundation, deliver an end-to-end Temporal Planner that integrates parking with driving.

2. Data is the key to implementing end-to-end solutions.

Implementing an end-to-end solution involves R&D team building, hardware facilities, data collection and processing, algorithm training and strategy customization, verification and evaluation, promotion, and mass production.

The integrated training of an end-to-end autonomous driving solution requires massive data, so data collection and processing is one of its main difficulties.

First of all, collecting the data, including driving data and scenario data such as roads, weather and traffic conditions, takes a long time and many channels. In actual driving, data within the driver's front view is relatively easy to collect, but the surrounding information is much harder to obtain.

During data processing, it is necessary to design data extraction dimensions, extract effective features from massive video clips, and compile statistics on the data distribution to support large-scale training.
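
As a minimal sketch of the distribution-statistics step (hypothetical tag names; real pipelines use far richer dimensions), clips can be bucketed by scenario tags and inverse-frequency weighted so long-tail cases are seen more often during training:

```python
from collections import Counter

# Hypothetical clip metadata produced by the feature-extraction pipeline.
clips = [
    {"id": "c1", "weather": "rain",  "road": "urban",   "event": "cut_in"},
    {"id": "c2", "weather": "clear", "road": "highway", "event": "none"},
    {"id": "c3", "weather": "clear", "road": "highway", "event": "none"},
    {"id": "c4", "weather": "clear", "road": "urban",   "event": "none"},
]

DIMS = ("weather", "road", "event")  # illustrative extraction dimensions

def scenario_distribution(clips):
    """Count clips per combination of extraction dimensions."""
    return Counter(tuple(c[d] for d in DIMS) for c in clips)

def sampling_weights(clips):
    """Inverse-frequency weights so long-tail scenarios are sampled more often."""
    dist = scenario_distribution(clips)
    return {c["id"]: 1.0 / dist[tuple(c[d] for d in DIMS)] for c in clips}

print(sampling_weights(clips))
# {'c1': 1.0, 'c2': 0.5, 'c3': 0.5, 'c4': 1.0} - the rare rainy cut-in clip
# is weighted up relative to the common clear-highway clips.
```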

DeepRoute.ai

As of March 2024, DeepRoute.ai's end-to-end autonomous driving solution had won a designation from Great Wall Motor and entered into cooperation with NVIDIA; it is expected to be adapted to NVIDIA Thor in 2025. In DeepRoute.ai's planning, the transition from the conventional solution to the "end-to-end" autonomous driving solution will go through sensor pre-fusion, HD map removal, and the integration of perception, decision-making and control.

GigaStudio

DriveDreamer, GigaStudio's autonomous driving model, is capable of scenario generation, data generation, driving action prediction and so forth. Scenario/data generation proceeds in two steps:

First, single-frame structured conditions guide DriveDreamer to generate driving scenario images, so that it learns structured traffic constraints.

Second, its understanding is extended to video generation: conditioned on continuous traffic structure, DriveDreamer outputs driving scene videos to further enhance its understanding of motion transformation.
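
In interface form, this curriculum can be sketched as follows (a hypothetical sketch; `train_image_step` and `train_video_step` stand in for DriveDreamer's actual, unpublished training routines):

```python
from typing import Any, Callable, Sequence

def two_stage_generation_training(
    train_image_step: Callable[[Any, Any], None],
    train_video_step: Callable[[Sequence[Any], Sequence[Any]], None],
    structures: Sequence[Any],  # per-frame traffic structure (lanes, boxes)
    frames: Sequence[Any],      # the corresponding real camera frames
) -> None:
    """Hypothetical curriculum mirroring the two steps described above."""
    # Step 1: single-frame structured condition -> single scene image, so the
    # generator first learns structured traffic constraints.
    for structure, frame in zip(structures, frames):
        train_image_step(structure, frame)
    # Step 2: continuous structure sequence -> scene video, extending the
    # generator to motion transformation across frames.
    train_video_step(structures, frames)
```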

3. End-to-end solutions accelerate the application of embodied robots.

In addition to autonomous vehicles, embodied robots are another mainstream scenario for end-to-end solutions. Moving from end-to-end autonomous driving to robots requires a more universal world model that adapts to more complex and diverse real application scenarios. The mainstream AGI (Artificial General Intelligence) development framework is divided into two stages:

Stage 1: the understanding and generation capabilities of basic foundation models are unified, and further combined with embodied artificial intelligence (embodied AI) to form a unified world model;

Stage 2: the world model, combined with capabilities for complex task planning and control and the induction of abstract concepts, gradually evolves into the era of interactive AGI 1.0.

In the process of bringing the world model to the ground, building an end-to-end VLA (Vision-Language-Action) autonomous system has become a crucial link. As the basic foundation model of embodied AI, VLA can seamlessly link 3D perception, reasoning and action to form a generative world model; it is built on a 3D-based large language model (LLM) and introduces a set of interaction tokens to interact with the environment.
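
The notion of interaction tokens can be illustrated with a small sketch (hypothetical token names and serialization; the report does not publish VLA's actual vocabulary): the LLM's vocabulary is extended with special tokens that splice 3D entities and actions into one sequence.

```python
# Hypothetical sketch: a VLA model extends an LLM vocabulary with special
# interaction tokens that bind 3D perception and action into one sequence.
INTERACTION_TOKENS = ["<obj>", "</obj>", "<move_to>", "<grasp>", "<done>"]

def to_sequence(instruction: str, objects_3d: list[dict]) -> list[str]:
    """Serialize an instruction plus 3D percepts into one token stream that
    the LLM backbone can reason over and continue with action tokens."""
    seq = instruction.split()
    for obj in objects_3d:
        seq += ["<obj>", obj["name"], f"@{obj['xyz']}", "</obj>"]
    return seq

stream = to_sequence("pick up the cup", [{"name": "cup", "xyz": (0.4, 0.1, 0.8)}])
print(stream)
# A trained VLA would continue the stream with action tokens, e.g.
# "<move_to> @(0.4, 0.1, 0.8) <grasp> <done>", which a controller executes.
```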

As of April 2024, a number of humanoid robot manufacturers had adopted end-to-end solutions.

For example, Udeer*AI's Large Physical Language Model (LPLM) is an end-to-end embodied AI solution. It uses a self-labeling mechanism to improve the efficiency and quality of the model's learning from unlabeled data, deepening its understanding of the world and enhancing the robot's generalization and environmental adaptability across modalities, scenes and industries.

LPLM abstracts the physical world and aligns this information with the abstraction level of the features in the LLM. It explicitly models each entity in the physical world as a token that encodes geometric, semantic, kinematic and intentional information.

In addition, LPLM adds 3D grounding to the encoding of natural language instructions, improving their accuracy to some extent. Its decoder learns by constantly predicting the future, strengthening the model's ability to learn from massive unlabeled data.
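
A small sketch makes the entity-as-token idea concrete (hypothetical field names and values; LPLM's actual encoding is not published in this report):

```python
from dataclasses import dataclass

@dataclass
class EntityToken:
    """One physical-world entity modeled as a single token, per the LPLM
    description: geometric, semantic, kinematic and intentional information
    in one embedding-ready record (field names are hypothetical)."""
    geometry: tuple    # e.g. 3D box (x, y, z, w, h, d)
    semantics: str     # e.g. "pedestrian"
    kinematics: tuple  # e.g. velocity (vx, vy, vz)
    intent: str        # e.g. "crossing"

scene_t = [
    EntityToken((2.0, 0.0, 1.5, 0.5, 1.7, 0.5), "pedestrian", (0.0, 1.2, 0.0), "crossing"),
    EntityToken((8.0, 3.0, 1.4, 1.8, 1.5, 4.2), "vehicle", (6.0, 0.0, 0.0), "lane_keep"),
]
# Self-supervised objective sketched in the text: given the scene tokens at
# time t, the decoder predicts the tokens at t+1 ("constantly predicting the
# future"), so massive unlabeled logs become training signal.
```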

Table of Contents

1. Foundation of End-to-end Autonomous Driving Technology

  • 1.1 Terminology and Concept of End-to-end Autonomous Driving
    • 1.1.1 Terminology Explanation of End-to-end Autonomous Driving
    • 1.1.2 Development History of End-to-end Autonomous Driving (1)
    • 1.1.3 Development History of End-to-end Autonomous Driving (2)
  • 1.2 Status Quo of End-to-end Autonomous Driving
    • 1.2.1 Development History of Autonomous Driving Algorithm Industrialization
    • 1.2.2 Status Quo of E2E-AD Model Mass Production
    • 1.2.3 Progress and Challenges of E2E-AD
  • 1.3 Comparison among End-to-end E2E-AD Motion Planning Models
    • 1.3.1 End-to-end E2E-AD Trajectory Planning of Autonomous Driving: Comparison among Several Classical Models in Industry and Academia
    • 1.3.2 Tesla: Perception and Decision-making Full Stack Integrated Model
    • 1.3.3 Model 2
    • 1.3.4 Model 3
    • 1.3.5 Model 4
    • 1.3.6 Model 5
  • 1.4 Comparison among End-to-end E2E-AD Models
    • 1.4.1 Horizon Robotics VADv2: An End-to-end Driving Model Based on Probability Programming
    • 1.4.2 Model 2
    • 1.4.3 Model 3
    • 1.4.4 Model 4
    • 1.4.5 Model 5
  • 1.5 Typical Cases of End-to-end Autonomous Driving E2E-AD Models
    • 1.5.1 Case 1 - SenseTime's E2E-AD Model: UniAD
    • 1.5.2 Case 2
    • 1.5.3 Case 3
  • 1.6 Embodied Language Models (ELMs)
    • 1.6.1 ELMs Accelerate the Landing of End-to-end Solutions
    • 1.6.2 Foundation Model Application Scenarios of ELMs (1)
    • 1.6.2 Foundation Model Application Scenarios of ELMs (2)
    • 1.6.2 Foundation Model Application Scenarios of ELMs (3)
    • 1.6.2 Foundation Model Application Scenarios of ELMs (4)
    • 1.6.2 Foundation Model Application Scenarios of ELMs (5)
    • 1.6.2 Foundation Model Application Scenarios of ELMs (6)
    • 1.6.2 Foundation Model Application Scenarios of ELMs (7)
    • 1.6.3 Limitations and Positive Effects of ELMs

2. Technology Roadmap and Development Trends of End-to-end Autonomous Driving

  • 2.1 Scenario Difficulties
    • 2.1.1 Scenario Difficulties and Solutions: Computing Power Supply/Data Acquisition
    • 2.1.2 Scenario Difficulties and Solutions: Team Building/Interpretability
  • 2.2 Development Trends
    • 2.2.1 Trend 1
    • 2.2.2 Trend 2
    • 2.2.3 Trend 3
    • 2.2.4 Trend 4
    • 2.2.5 Trend 5: Universal World Model: Three Paradigms and System Construction of AGI
    • 2.2.6 Trend 6
    • 2.2.7 Trend 7

3. Application of End-to-end Autonomous Driving in the Field of Passenger Cars

  • 3.1 Dynamics of Domestic End-to-end Autonomous Driving Companies
    • 3.1.1 Comparison among End-to-End Foundation Model Technologies of OEMs
    • 3.1.2 Comparison among End-to-End Foundation Model Technologies of Major Suppliers
    • 3.1.3 Patents on End-to-End Autonomous Driving of Intelligent Vehicles
  • 3.2 DeepRoute.ai
    • 3.2.1 Implementation Progress of End-to-end Solutions
    • 3.2.2 Difference between End-to-end Solutions and Traditional Solutions
  • 3.3 Haomo.AI
    • 3.3.1 End-to-end Solution Construction Strategy
    • 3.3.2 Reinforcement Learning/Imitation Learning Techniques
    • 3.3.3 Training Methods of End-to-end Solutions
  • 3.4 PhiGent Robotics
    • 3.4.1 Interactive Scenario Diagrams for Agents
    • 3.4.2 GraphAD Construction Path
    • 3.4.3 GraphAD Test Results
  • 3.5 Enterprise 5
  • 3.6 Enterprise 6
  • 3.7 Enterprise 7
  • 3.8 Enterprise 8
  • 3.9 Enterprise 9
  • 3.10 Enterprise 10
  • 3.11 Enterprise 11
  • 3.12 NIO
  • 3.13 Xpeng
  • 3.14 Li Auto
    • 3.14.1 Li Auto's End-to-end Solution
    • 3.14.2 Li Auto's Current Autonomous Driving Solution
    • 3.14.3 Li Auto's DriveVLM
  • 3.15 Enterprise 15
  • 3.16 Enterprise 16
  • 3.17 XX University
  • 3.18 XX University

4. Application of End-to-end Autonomous Driving in the Field of Robots

  • 4.1 Progress of End-to-end Technology for Humanoid Robots
    • 4.1.1 Humanoid Robots Are the Carrier of Embodied Artificial Intelligence
    • 4.1.2 NVIDIA GTC 2024: Several Core Humanoid Robot Companies Participating in the Conference
    • 4.1.3 Global Demand for Humanoid Robots
    • 4.1.4 Comparison among Global Humanoid Robot Features
  • 4.2 Humanoid Robot: Figure 01
    • 4.2.1 Features of Figure 01
    • 4.2.2 Working Principle of Figure 01
    • 4.2.3 Functions of Figure 01
    • 4.2.4 Development of Figure 01
  • 4.3 Zero Demonstration Autonomous Robot Open Source Model: O Model
    • 4.3.1 Implementation Principle of O Model
  • 4.4 Nvidia's Project GR00T
    • 4.4.1 Project GR00T - Robot Foundation Model Development Platform
    • 4.4.2 Project GR00T - Robot Learning and Scaling Development Workflow
    • 4.4.3 Project GR00T - Robot Isaac Simulation Platform
    • 4.4.4 Project GR00T - Omniverse Replicator Platform
  • 4.5 Robot Case 5
  • 4.6 Robot Case 6
  • 4.7 Robot Case 7
  • 4.8 Robot Case 8
  • 4.9 Robot Case 9
  • 4.10 Status Quo and Future of Foundation Models+Robots
    • 4.10.1 Application of Foundation Models in the Robot Field
    • 4.10.2 End-to-end Application and Future Prospect of Foundation Models in the Robot Field
    • 4.10.3 Future Trends of Embodied Artificial Intelligence

5. How to Implement End-to-end Autonomous Driving Projects?

  • 5.1 E2E-AD Project Implementation Case: Tesla
    • 5.1.1 Development History of Autopilot Hardware and Solutions
    • 5.1.2 Evolution of Self-developed Autopilot Hardware and Computing Power Requirements of FSD v12.3
    • 5.1.3 Autopilot: Multi-task E2E Learning Technical Solutions
    • 5.1.4 E2E Team
    • 5.1.5 Descriptions of the Most Important AI Jobs in Recruitment
    • 5.1.6 E2E R&D Investment
  • 5.2 E2E-AD Project Implementation Case: Wayve
    • 5.2.1 Profile
    • 5.2.2 Data Generation Cases of E2E
    • 5.2.3 How to Build an E2E-AD System
    • 5.2.4 Team Layout
  • 5.3 Team Building and Project Budget
    • 5.3.1 Autonomous Driving Project: Comparison between Investment and Team Size
    • 5.3.2 E2E-AD Project: Top-level System Design and Organizational Structure Design
    • 5.3.3 E2E-AD Project: Development Team Layout Budget and Competitiveness Construction
    • 5.3.4 E2E-AD Project: Job Design and Description
    • 5.3.5 Cases of End-to-end Autonomous Driving Team Building of Domestic OEMs
  • 5.4 Automotive E2E Autonomous Driving System Design
    • 5.4.1 E2E-AD Project Development Business Process
    • 5.4.2 Project Business Process Reference (1)
    • 5.4.3 Project Business Process Reference (2)
  • 5.5 Cloud E2E Autonomous Driving System Design
    • 5.5.1 E2E-AD Project Business Process Reference
    • 5.5.2 E2E-AD Project Cloud Design (1)
    • 5.5.3 E2E-AD Project Cloud Design (2)