The National Highway Traffic Safety Administration is widening its investigation into advanced driver assistance systems, examining how automakers market, implement, and monitor Level-2 semi-autonomous features across multiple brands.

The expanded review comes after a series of high-profile crashes involving vehicles using systems like Tesla’s Autopilot, GM’s Super Cruise, and Ford’s BlueCruise. NHTSA officials say the goal is to establish clearer safety standards and prevent drivers from over-relying on technology that still requires full human attention.

Some of us remember when cruise control was the height of driving automation: set your speed on the highway, keep your foot off the gas, and that was it. Level-2 systems have changed the game entirely. They can steer, accelerate, brake, and even change lanes with minimal driver input.

But here’s the catch: despite what the marketing materials might suggest, these systems are not self-driving cars. They require constant supervision, hands on or near the wheel, and eyes on the road. The gap between what the technology can do and what drivers think it can do has become a safety concern.

NHTSA’s expanded review will look at roughly a dozen automakers and their driver assistance technologies. The agency is requesting detailed data on how these systems function, how they monitor driver attention, what happens when drivers disengage or ignore warnings, and how companies track real-world performance after vehicles leave the lot. Tesla’s Autopilot and Full Self-Driving (which, despite its name, is still Level 2) are under particular scrutiny, but the investigation extends to GM’s Super Cruise and Ultra Cruise, Ford’s BlueCruise, Mercedes-Benz’s Drive Pilot, and similar systems from BMW, Nissan, and others.

One major focus is driver monitoring. Some systems, like Super Cruise, use infrared cameras to track eye movement and ensure the driver is paying attention. Others rely on torque sensors in the steering wheel, an approach that can be fooled with something as simple as a weighted object.
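
To make the difference concrete, here is a minimal Python sketch of the two approaches. It is a toy model written for illustration, not any automaker’s actual code; the DriverState fields, the check functions, and the torque threshold are all invented assumptions.

```python
# Illustrative toy model only: not any automaker's implementation.
# All names and thresholds below are invented for this sketch.
from dataclasses import dataclass

@dataclass
class DriverState:
    steering_torque_nm: float  # torque the car senses on the wheel
    eyes_on_road: bool         # what a cabin camera would report

def torque_only_check(state: DriverState, min_torque_nm: float = 0.3) -> bool:
    """Hands-on-wheel check: passes whenever enough torque is sensed,
    even if that torque comes from a weight hanging on the wheel."""
    return state.steering_torque_nm >= min_torque_nm

def camera_check(state: DriverState) -> bool:
    """Gaze-based check: passes only if the driver is actually watching the road."""
    return state.eyes_on_road

# A distracted driver who has defeated the torque sensor with a weighted object:
distracted = DriverState(steering_torque_nm=0.5, eyes_on_road=False)

print("torque-only check passes:", torque_only_check(distracted))  # True
print("camera-based check passes:", camera_check(distracted))      # False
```

The point of the comparison: the torque-only check passes even though the simulated driver is looking away, while the camera-based check does not.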

NHTSA wants to know which approach actually keeps drivers engaged and which gives a false sense of security. The agency is also examining how automakers name and advertise these features. Terms like “Autopilot” and “Full Self-Driving” may lead drivers to believe the car is more capable than it actually is.

The review comes on the heels of multiple fatal crashes involving Level-2 systems. In several cases, drivers using Autopilot collided with stationary emergency vehicles or failed to navigate construction zones. Other incidents involved drivers who appeared to be distracted or asleep while their vehicles were in semi-autonomous mode. While the technology has the potential to reduce accidents caused by human error, it also introduces new risks when drivers misunderstand its limitations.

For consumers, this expanded review could lead to significant changes. Automakers may be required to implement more robust driver monitoring systems, marketing language might get toned down or regulated, and some features could be disabled or restricted until new safety protocols are in place. On the flip side, clearer standards could give buyers more confidence in these technologies, knowing they’ve been vetted more thoroughly.

The auto industry has responded cautiously. Most manufacturers insist their systems are safe when used as intended and point to data showing reduced crash rates in vehicles equipped with Level-2 features. But there’s also acknowledgment that driver education has lagged behind the technology.

Some companies have already started updating their systems with more persistent warnings and stricter monitoring. Tesla, for its part, has faced the most criticism but continues to roll out updates to Autopilot and FSD through over-the-air software patches.

NHTSA has not announced a timeline for completing the review, but agency officials have indicated that new regulations could be proposed by late 2026 or early 2027. In the meantime, the message to drivers remains the same: keep your hands on the wheel and your eyes on the road, no matter how smart your car claims to be.

Level 2 is assistance, not autonomy. The technology is impressive, but it’s not ready to take over completely. And until regulators, automakers, and drivers all get on the same page about what these systems can and cannot do, the risks will remain.
