FSD Feedback Funnel

Jul 20, 2025

Tesla started down the path toward full self-driving (FSD) by tackling the most basic part of driving: staying in the lane. This is Autopilot. The car could also slow down, speed up, and steer through bends on the highway, all driven by a set of hand-written rules in the code. When the driver hit the brake pedal or turned the steering wheel far enough, Autopilot would turn off and the human had to take full control. These disengagements gave Tesla basic data on when the system was making a mistake.
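
To make the rule-based idea concrete, here is a minimal sketch of lane keeping with disengagement logging. Every name and threshold is hypothetical; it only illustrates the shape of “rules in the code” and how a brake tap or firm steering input becomes a feedback signal.

```python
# Hypothetical sketch of rule-based lane keeping, not Tesla's code.

def autopilot_step(lane_offset_m, driver_braking, steering_torque_nm):
    """Return a steering correction, or None if the driver disengaged."""
    TORQUE_LIMIT_NM = 3.0  # hypothetical override threshold

    # Driver input wins: a brake tap or firm steering hands back control.
    if driver_braking or abs(steering_torque_nm) > TORQUE_LIMIT_NM:
        log_disengagement(lane_offset_m, steering_torque_nm)
        return None

    # Hand-written rule: steer proportionally back toward lane center.
    return -0.5 * lane_offset_m

def log_disengagement(lane_offset_m, steering_torque_nm):
    # Each disengagement is a coarse signal: "the human overrode the
    # system here." The snapshot tells engineers where to look.
    print(f"disengaged: offset={lane_offset_m:.2f}m, "
          f"torque={steering_torque_nm:.1f}Nm")
```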

The next step was shadow mode. This software ran while humans drove their Teslas around, watching how people normally drive so the system could learn more than staying in its lane and keeping a safe distance from others. Shadow mode collected an immense amount of data, and it was more specific data than disengagements alone could provide.
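
Here is a rough sketch of the shadow-mode idea: compute what the system would do, never act on it, and log only the moments where it disagrees with the human. The function names and threshold are assumptions for illustration, not Tesla’s actual implementation.

```python
# Hypothetical shadow-mode loop: plan silently, compare, log divergence.

DIVERGENCE_DEG = 5.0  # hypothetical "worth logging" threshold

def plan_steering(sensor_frame):
    # Stand-in for whatever the current driving policy would output.
    return 0.0

def log_divergence(sensor_frame, planned_deg, human_deg):
    # Queue the moment for upload; disagreements are the useful data.
    print(f"divergence: planned={planned_deg:.1f}, human={human_deg:.1f}")

def shadow_tick(sensor_frame, human_steering_deg):
    planned_deg = plan_steering(sensor_frame)  # computed, never executed
    # Agreement is cheap and common; only disagreements get logged.
    if abs(planned_deg - human_steering_deg) > DIVERGENCE_DEG:
        log_divergence(sensor_frame, planned_deg, human_steering_deg)
```

The design point is that agreement costs nothing to discard, so the fleet can drive billions of miles while uploading only the interesting slivers.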

Shadow mode enabled Enhanced Autopilot and Navigate on Autopilot. These feature sets allowed the car to change lanes to pass slower traffic and to drive from one highway to another, following the route the map guidance laid out. These features were more complicated and required even more rules to be added to the code.
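
As a toy illustration of what “more rules” means here, one lane-change rule might look something like this. The threshold and names are invented for the sketch; each new behavior meant stacking up more rules like it.

```python
# One hypothetical Navigate-on-Autopilot-style rule: pass a slow
# lead car when the adjacent lane is clear.

def should_change_lanes(lead_speed_mph, set_speed_mph, target_lane_clear):
    SLOWDOWN_TOLERANCE_MPH = 5  # hypothetical: how slow counts as "slower traffic"
    return target_lane_clear and (
        lead_speed_mph < set_speed_mph - SLOWDOWN_TOLERANCE_MPH
    )
```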

The last feature added to Autopilot was the ability to recognize and obey stop signs and traffic lights. Again, more shadow mode learning, more specific driving needs, and more rules added to the code.

To take the next big leap, Tesla had to change their code from a massive set of rules to neural nets (software that simulates how we think) in FSD version 12*. This allowed Tesla vehicles to drive on any road by imitating human decision making and input. Once Tesla took this step, another feature became increasingly important as a way for the system to get more feedback than a disengagement could provide. There was a camera button on the touchscreen that drivers could tap when the car did something not quite right. It wasn’t a big enough issue that the driver needed to take over immediately, but it was something the car shouldn’t do again: say, passing a lower speed limit sign without slowing down, or failing to yield to a pedestrian waiting on the sidewalk. Drivers could tap that button at any time and provide specific feedback to Tesla, basically saying, “look at this.”
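
Here is a hedged guess at what a camera-button report might bundle. Tesla hasn’t published the actual schema, so every field below is an assumption.

```python
# Hypothetical "look at this" report; the real schema is unpublished.

import time
from dataclasses import dataclass, field

@dataclass
class FeedbackSnapshot:
    timestamp: float = field(default_factory=time.time)
    trigger: str = "driver_camera_button"   # vs. "disengagement"
    clip_seconds: int = 10                  # recent video around the tap
    vehicle_state: dict = field(default_factory=dict)

def on_camera_button_tap(speed_mph, fsd_active):
    # No takeover needed; the driver is just flagging the moment.
    return FeedbackSnapshot(
        vehicle_state={"speed_mph": speed_mph, "fsd_active": fsd_active}
    )  # queued for upload and engineer review
```

The appeal of the design is that the tap costs the driver nothing, yet carries enough context for an engineer to reconstruct the moment later.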

Over the course of these updates, Tesla engineers refined the system to have fewer issues, which led to fewer disengagements and interventions**. That was great news, since it meant the system was getting better, but it also meant more miles and more time had to pass before they could collect the same amount of meaningful data to improve the system. The issues that did come up weren’t always obvious, either. Engineers had to review the data to understand what the car did wrong and infer why the driver considered it a problem. This is where voice recordings came in. When a disengagement occurred, the driver had the option to record a short voice message to help the engineers understand the problem, which helped them identify and fix even more specific issues.
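
Some back-of-the-envelope arithmetic shows why improvement shrinks the data stream. The rates below are illustrative, not Tesla’s real numbers.

```python
# As the system improves, the same number of feedback events takes
# far more driving. Rates are illustrative only.

events_needed = 1_000
for miles_per_disengagement in (10, 100, 1_000, 10_000):
    total_miles = events_needed * miles_per_disengagement
    print(f"{miles_per_disengagement:>6} mi/event -> "
          f"{total_miles:>10,} miles for {events_needed} events")
```

Every tenfold drop in the event rate means tenfold more driving for the same signal, which is exactly why the funnel keeps adding richer feedback channels.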

Several months later, Tesla released version 13 of FSD, and the issues and feedback became extremely specific. So much so that it’s no longer just about traffic laws; it’s about what it truly means to be the best driver on the road. In some cases, what the best driver would do is subjective. For a lot of drivers, speed takes precedence over safety at times, but Tesla wants the safest fleet on the road. To ensure that happened with the launch of Robotaxi rides in Austin, safety monitors were added to the passenger seat for every ride. They are there to provide hyper-specific feedback, and they do so based on how Tesla has trained them: safety above all else. A robotaxi isn’t focused on saving an extra minute on the ride if shooting for a small gap in traffic is a risky maneuver; it should wait and play it safe. While in the car, the safety monitor has multiple ways to command the car if it’s about to do something it shouldn’t, or they can take a snapshot to record a moment. After the rides are over, they can prepare a write-up of specific examples and add helpful context.

Tesla is on the cusp of achieving full self-driving using only cameras and AI. They will soon be able to flip a switch (push a software update) and enable fully autonomous driving in millions of cars at once. Among the many reasons they will be able to do it, this is one: they built a funnel that provides the right level of feedback for where they are in the process. The more specific the feedback, the more 9s they can add to their reliability score.
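
For a sense of what “adding a 9” means in practice, here is a tiny illustration, assuming reliability is measured as the fraction of miles driven without an issue. The framing is mine, not an official Tesla metric.

```python
# Each extra 9 of per-mile reliability stretches the issue-free
# distance by a factor of ten. Illustrative framing only.

for nines in range(3, 7):
    reliability = 1 - 10 ** -nines      # 0.999, 0.9999, ...
    miles_between_issues = 10 ** nines  # mean miles per issue at that rate
    print(f"{reliability:.6f} -> roughly one issue per "
          f"{miles_between_issues:,} miles")
```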

*I should say by now that I am simplifying about 10 years of history and skipping over FSD versions 8 through 11. Those versions were more of the same rule-based code and began to enable city driving, but they weren’t nearly as smooth or as safe as version 12+.

**Interventions are when drivers help the car drive by tapping the accelerator as a way of giving it a gentle push to say “go on, you can do it.” These have also been part of the feedback loop since Autopilot.