As Eastern Air Lines Flight 401 descended toward Miami International Airport on the night of Dec. 29, 1972, the flight crew noticed that a light meant to confirm the nose landing gear was down and locked had failed to illuminate.

Instead of proceeding with the approach, the three-person flight crew leveled the aircraft at 2,000 feet, activated the new autopilot system in the 5-month-old jet and immersed themselves in troubleshooting the landing-gear problem. Six minutes passed before they realized something else was amiss.

“We did something to the altitude,” said the first officer, according to cockpit voice recorder transcripts. “We’re still at 2,000, right?” Captain Robert Loft then responded: “Hey, what’s happening here?”

Five seconds later, the L-1011 TriStar jet crashed into the Everglades, killing 99 of the 176 people aboard. National Transportation Safety Board investigators cited the pilots’ inattention and distraction as the probable cause of the crash. But for the first time, investigators also examined the interplay between pilots and the autopilot system in commercial aviation.

Nearly a half-century later, the relationship between humans and automation is better understood, but it remains a complex one. Calibrating trust between the two has become an important design consideration. Yet problems persist. First observed in aviation, they are now surfacing in car crashes. In investigations into fatal Tesla and Uber crashes, NTSB investigators found that the human motorists responsible for driving had been lulled into a false sense of security by the driver-assist systems controlling their vehicles.

Experts warn that lessons learned about guarding against this “automation complacency” in aviation are not being heeded in automotive applications.

“It has to be brought into the consciousness, because these lessons have been paid for in blood,” said Frank Flemisch, a professor of human systems integration at Germany’s RWTH Aachen University who has worked in the automotive and aviation industries. “We have paid for them, so why not use them as a society? The trick will be, on one hand, to use those lessons but adapt them for this new domain. That’s not just copying the techniques, but adapting and translating them.”

One key difference is that airline pilots may have considerable time to respond to a request for a takeover from an automated system. A highway environment requires a faster reaction from a human driver, who may face an imminent hazard. Also, commercial pilots undergo far more training than everyday vehicle drivers.

“It’s not comparable,” said Antonello De Galicia, a functional safety engineer at Jaguar Land Rover who previously worked at Airbus Group, where he participated in a research project between the aviation manufacturer and French automakers to better understand potential crossover learnings. “The deep difference in their capabilities is enormous.”

But even trained professionals can succumb to the boredom that automation complacency breeds. In November 2009, the flight crew of Northwest Airlines Flight 188 overshot their destination, Minneapolis-St. Paul International Airport, by more than 100 miles because they were distracted by a personal computer in the cockpit after engaging the autopilot for cruise flight, according to the NTSB. The flight landed an hour late without further incident.

A similar situation in the automotive realm had deadly consequences. On March 18, 2018, a safety driver behind the wheel of one of Uber’s autonomous test vehicles in Tempe, Ariz., was watching “The Voice” on her phone when the vehicle struck and killed a pedestrian.

“Automation performs remarkably well most of the time, and therein lies the problem,” NTSB member Bruce Landsberg wrote in the agency’s final report on the collision.

When automation performs in unexpected or confusing ways, problems can spiral into catastrophic outcomes. Aboard Air France Flight 447 in June 2009, iced-over airspeed sensors fed the flight computers unreliable data, and the autopilot abruptly disconnected, handing control back to the pilots. As the jet climbed at a high pitch and triggered a stall warning, information was not displayed in a way the pilots could quickly understand, a subsequent investigation found. In their confusion, the pilots deepened the aerodynamic stall rather than making the control inputs that would have allowed a recovery. The plane crashed into the Atlantic Ocean, killing all 228 people aboard.

This “mode confusion” may crop up in vehicles that have driver-assist systems, according to a 2014 study by Missy Cummings, a Duke University professor and director of the school’s Humans and Autonomy Laboratory. There’s an inherent danger in that.

“At precisely the time when the automation needs assistance, the operator could not provide it,” she wrote. “We cannot assume the operator to be always engaged, always informed and always ready to intervene and make correct decisions when required by the automation to do so.”

In many corners, the proposed remedy for automation complacency and mode confusion is to develop more sophisticated automated systems. Cummings notes that the accident rate in U.S. commercial jet operations fell from approximately 4 per million departures to 1.4 per million between 1959 and 2012, and automation undoubtedly played a role in that improvement.

On the ground, federal regulators have found that driver error is a critical factor in 94 percent of motor-vehicle crashes, a statistic that self-driving vehicle companies often tout as their raison d'être. While automation can lead to net safety benefits, transportation incidents that involve human-machine interaction show it should not be considered a cure-all.

“People often mistake automation as a ‘remedy for error,’ ” said human-machine interaction researcher Liza Dixon. “But adding automation to a human-machine system doesn’t remove the error. It displaces it, either by creating new problems or by outsourcing the error to system designers. This is perhaps one of the greatest takeaways from the aviation domain which applies directly to automotive.”