No longer relegated to science fiction novels, self-driving cars are increasingly within reach. In recent years, companies as diverse as Google and Tesla have made high-profile forays into autonomous and semi-autonomous vehicles. Their goal is to reduce the risk of accidents by taking human error out of the equation.

But could these advances backfire? A string of recent accidents involving Tesla’s Autopilot function suggests so.

Several accidents tied to the Autopilot feature

Since its release, Autopilot has been linked to several accidents, including one fatality. A Pennsylvania driver recently crashed his Tesla Model X SUV shortly after disengaging Autopilot. A driver in Montana admitted to using Autopilot when his vehicle veered into a guardrail. And Autopilot was engaged during a deadly collision between a Tesla Model S and a semitruck on a Florida highway.

Autopilot not really true to its name

Autopilot is a driver assistance technology designed to keep the vehicle within its lane and match the speed of surrounding traffic. Contrary to what its name suggests, however, it is not meant to give the driver a free ride. Drivers must still keep their hands on the steering wheel and remain attentive to what is going on around them.

In the wake of these crashes, Consumer Reports has urged Tesla to deactivate the Autopilot feature until it can be reintroduced with appropriate safeguards – and a different name. The watchdog publication called the term "Autopilot" "misleading and potentially dangerous."

While autonomous features still hold promise for making roads safer, for now that promise remains largely unfulfilled.