I. AI5 chip design is finalized; the FSD core "brain" nears mass production
As the successor to the HW4.0 hardware platform, the AI5 chip is the core of Tesla's future autonomous driving strategy. The successful tape-out means the chip design has been officially frozen and has entered preparation for mass production. According to official disclosures, AI5 delivers roughly a 40-fold improvement in overall performance over HW4, including an 8-fold increase in raw compute and a jump in memory capacity from 16GB to 144GB (9-fold). Single-chip AI compute approaches 2500 TOPS, with the design specifically optimized for Transformer large-model inference.
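As a rough sanity check on the figures above, the ratios can be worked through directly. Note that the HW4 baseline compute implied below is back-calculated from the article's multipliers and is not an official specification:

```python
# Back-of-envelope check of the stated AI5 vs. HW4 figures.
# The implied HW4 compute baseline is derived from the article's
# 8x multiplier, not from official Tesla specifications.

ai5_memory_gb = 144
hw4_memory_gb = 16
ai5_compute_tops = 2500  # "close to 2500 TOPS" per the article

memory_ratio = ai5_memory_gb / hw4_memory_gb
print(f"Memory: {hw4_memory_gb} GB -> {ai5_memory_gb} GB ({memory_ratio:.0f}x)")
# → Memory: 16 GB -> 144 GB (9x)

# If raw compute rose 8x, the implied HW4 baseline would be:
implied_hw4_tops = ai5_compute_tops / 8  # illustrative only
print(f"Implied HW4 compute baseline: {implied_hw4_tops} TOPS")
# → Implied HW4 compute baseline: 312.5 TOPS
```

The memory ratio confirms the article's internal consistency: 144GB is exactly nine times 16GB, matching the stated "9-fold increase."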
In terms of mass production, AI5 will be dual-sourced from TSMC and Samsung to diversify supply-chain risk and secure capacity. Trial production is expected by the end of 2026, with large-scale mass production in 2027. The chip will not only power the next-generation FSD system but also serve as the core compute unit for the Optimus humanoid robot and the CyberCab robotaxi. Musk has called it "the optimal inference chip for models with under 250 billion parameters."

II. A16 takes over development; dual-foundry strategy lays a solid foundation for capacity
While advancing AI5 toward mass production, Tesla has begun development of the next-generation A16 chip, continuing its "in-house design + external foundry" model. AI5 has been confirmed for joint production by TSMC and Samsung, with Samsung's Texas fab set to become one of the key production bases under a deal worth up to $16.5 billion. As the successor to AI5, A16 will further optimize process technology and energy efficiency and is expected to adopt a more advanced node, continuously widening the performance gap with traditional automotive chips.
Musk once emphasized that AI5's performance leap over HW4 "far exceeds any industry upgrade," and A16 will build on that foundation, covering scenarios across autonomous driving, robotics, and edge computing. The dual-foundry strategy not only secures advanced-process supply but also reduces costs through competition, laying the groundwork for large-scale deployment of Tesla's AI hardware.
III. Dojo3 restarts; supercomputing power keeps pace with chip iteration
Following the successful tape-out of AI5, Tesla has restarted the Dojo3 supercomputer project. Suspended in August 2025, the project is being revived around the AI5 and A16 chip architectures to build a "chip + supercomputer" collaborative ecosystem. Dojo3 adopts a modular design in which a single motherboard can integrate 512 AI5/A16 chips, significantly raising compute density, and it is purpose-built for FSD large-model training, robot-algorithm optimization, and massive driving-data processing.
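Taking the article's numbers at face value, the aggregate compute of one such board can be estimated. This is a purely illustrative calculation combining the "512 chips per motherboard" figure with the ~2500 TOPS per-chip AI5 figure; it is not an official Dojo3 specification:

```python
# Illustrative per-board compute estimate for Dojo3, assuming
# 512 AI5-class chips at ~2500 TOPS each (article's figures;
# not an official Tesla specification).

chips_per_board = 512
tops_per_chip = 2500              # tera-ops/s per chip (approximate)

board_tops = chips_per_board * tops_per_chip
board_exaops = board_tops / 1e6   # 1 exa-op/s = 1,000,000 TOPS

print(f"Per board: {board_tops:,} TOPS (~{board_exaops:.2f} exa-ops/s)")
# → Per board: 1,280,000 TOPS (~1.28 exa-ops/s)
```

On these assumptions, a single fully populated board would land in exa-ops territory, which helps explain the "significant compute density" claim.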
Compared with its predecessor Dojo, Dojo3 will shorten training cycles from monthly to weekly, supporting closed-loop iteration on data from Tesla's global fleet of millions of vehicles. At the same time, Dojo3 will open part of its compute for commercial use, driving Tesla's transformation from automaker to AI compute service provider in direct competition with giants such as NVIDIA and AWS.
HongKong.info is committed to fair and transparent reporting. This article aims to provide accurate and timely information but should not be construed as financial or investment advice. Given rapidly changing market conditions, we recommend verifying the information yourself and consulting a professional before making any decisions based on it.