The Xiaomi SU7 Incident: Young People Educated by Intelligent Driving
Intelligent driving features have become standard in new energy vehicles, and young people are sharing their experiences with this emerging technology. Initially, they posted excitedly on social media about how “technology changes lives”; now, they also document accidents caused by “intelligent driving failure.” A tragic example is the high-speed collision and fire involving a Xiaomi SU7 on March 29, which killed three female university students. The vehicle struck a cement post in a construction zone that the system had failed to identify in time; prior to the crash, it was reportedly in NOA (Navigation on Autopilot) mode. The incident has reignited public concern about the safety of intelligent driving and the responsibilities of car manufacturers.
Recently, Dingjiao One spoke with several young owners of brands including Xpeng, Li Auto, and NIO who had accidents while using intelligent driving. These drivers, aged 20 to 35, were originally enthusiastic supporters of the technology, firmly trusting that “algorithms outperform humans.” The accidents taught them otherwise: algorithms may fail to recognize a fire hydrant, a “flexible” automatic lane change carries a real risk of misjudgment, and the system may disengage from intelligent driving mode milliseconds before a collision, leaving the human driver to take the blame.
Post-accident interactions with car manufacturers have added to their frustration. Regardless of whether manufacturers acknowledge that intelligent driving played a role in an accident, many owners find it difficult to obtain compensation due to a “lack of legal support.” Some owners only learned after their accidents that the systems manufacturers promote as “nearly L3” and “high-level intelligent driving,” which sounds far more advanced than “assisted driving,” remain fundamentally L2. Under current law, the driver bears primary responsibility in the event of an accident. Even owners who eventually secure compensation through social media appeals or drawn-out negotiations with manufacturers emerge from the process exhausted.
One owner reflected, “Drivers should not blindly trust company promotions,” and called on manufacturers to talk about “responsibility” alongside their claims of disruption. The year 2025 is widely anticipated as the dawn of mass-market intelligent driving, and intelligent driving capability has become a key battleground for domestic automakers. Shi Jianhua, deputy secretary-general of the China Electric Vehicle Hundred Forum, recently stated that the penetration rate of L2 assisted driving was expected to exceed 55% in 2024, with NOA penetration reaching 11%; end-to-end automated driving (urban NOA) is also expected to accelerate, with a forecast penetration rate of 20% by 2025. However, incidents like the Xiaomi SU7 crash have awakened many to the fact that genuine “intelligent driving equality” requires not only algorithm upgrades and price cuts from manufacturers but also stronger safeguards so that more people can use the technology safely.
Why do these intelligent driving accidents happen? Promotional videos depict futuristic scenes of steering wheels turning on their own, but the reality can be quite different. Based on owners’ accounts and analyses from industry professionals, accidents in intelligent driving mode typically stem from three main causes: perception failures, system misjudgments, and human-machine interaction conflicts.

Perception failures can be understood simply as “machine blind spots.” In late December, one automobile brand pushed a new intelligent driving software version to users, introducing a seamless “parking space to parking space” feature. One day in January, a car owner in Guilin, Guangxi, used the new function in a parking garage, and the car unexpectedly collided with a fire hydrant. Afterward, the manufacturer attributed the incident to “non-standard parking,” calling the fire hydrant a “special obstacle.” The owner was baffled: had the car simply stopped when it encountered the fire hydrant, she reasoned, it would have sustained only minor scratches.
The second cause is system misjudgment, which can be understood as “algorithm confusion.” In early 2025, another owner narrowly avoided a lane-change accident while driving at over 70 km/h on a ramp in Wuhan: the system suddenly attempted to steer into the adjacent lane, having misidentified a concrete barrier as a passable lane. Fortunately, drawing on 14 years of driving experience, he yanked the steering wheel back and braked in time to avoid disaster. He reported the issue to customer service, which acknowledged the system’s “misjudgment of lane-change intent” and said it would be optimized through an OTA update.
The third cause is human-machine interaction conflict. One owner recalled that while he was using the urban NCA feature, the vehicle suddenly swerved right without warning and collided with a neighboring car. The report provided by the service center showed that the system had disengaged from intelligent driving mode mere milliseconds before the collision. The owner stressed that he had received no takeover prompt, and that human reaction time was far too short to respond in those final moments.
These cases point to a common risk in current intelligent driving technology: algorithms perform reliably in the 99% of routine scenarios but can suddenly fail in the 1% of edge cases, such as irregular barriers or temporary roadblocks. The gap between manufacturers’ promotional material and their after-sales responses has left many owners disillusioned. One owner noted that the sales pitch touted “zero takeover” capability, yet after his accident the manufacturer insisted the system was merely L2 and the driver bore responsibility.
Under the current legal framework, consumers face significant hurdles in asserting their rights after intelligent driving accidents. One accident victim observed that compensation often hinges on personal advocacy: owners who cannot press their case may receive nothing, while those who push harder may win some form of restitution. After a recent accident, another owner was deemed fully responsible even though the system had issued conflicting commands under complex road conditions, an outcome that illustrates the difficulties consumers encounter.
After such accidents, many owners find themselves in a “legal no-man’s land,” leading some to turn to social media for public attention. One owner, dissatisfied with how the service center handled her case, posted about her experience online and drew a swift, favorable response from the manufacturer. Because companies are highly sensitive to negative publicity, they often move quickly to contain the damage.
As intelligent driving technology continues to evolve, car manufacturers must prioritize safety over sales figures. They should establish clear standards for accident liability and guarantee consumers access to their own driving data. There is also an urgent need for regulations that mandate “black boxes” in vehicles to trace accident responsibility and that introduce dedicated intelligent driving insurance. Only by addressing these foundational issues can intelligent driving move toward a safer future.