So-called Level 2, or partially automated, driving technology, such as Tesla’s Autopilot and Cadillac’s Super Cruise, can control acceleration, braking and steering, among other tasks. But the systems available on vehicles today don’t replace an attentive driver with eyes on the road.
Researchers at the Insurance Institute for Highway Safety in Arlington, Va., found that the designs, and even the names, of some of these advanced driver-assistance systems make it too easy for the driver to trust and heavily rely on the technology, resulting in driver disengagement and an increased risk of a crash.
As part of the report, the institute on March 12 released a set of safety guidelines for the design of these driver-assist systems, emphasizing the need for better methods of monitoring driver engagement and regaining the driver’s attention when necessary.
“Let’s get this information out there based on our knowledge and what we’re seeing, and start a dialogue of how do we make sure that we’re not starting to create systems that are going to result in less safety,” David Harkey, IIHS president, told Automotive News.
Harkey spoke with Staff Reporter Audrey LaForest about the group’s guidelines for driver-assist systems, where responsibility lies for the safety of these systems and for educating consumers about them, and what the group sees as the next step for industrywide action. Here are edited excerpts.
Q: This month, you issued safety recommendations for driver-assist systems. Who is your audience for these recommendations?
A: There are different pieces of this that we need to convey to different audiences. It’s obviously the automakers and the component suppliers who are designing these systems and implementing them into their vehicles. We want to be sure that they understand how to ensure that they are providing a safe driving environment, and they are continuing to keep the driver engaged in the driving task. The other audience we’re trying to reach with this is consumers. There’s certainly a subset out there that thinks that some of these systems can do more than they are designed to do, and they are interpreting them to be self-driving in some ways. And so, getting consumers to better understand the limitations of these systems is part of what we’re trying to do as well. The third audience is the U.S. Department of Transportation, NHTSA, in particular, who continues to think about how we might regulate automated vehicles in the future.
Who should be responsible for ensuring the safety of these systems?
It’s a shared responsibility. I don’t think that you necessarily can put the burden on any one group. It goes all the way from product research and development to product implementation. Certainly regulation has a role to play in there from the DOT. Consumers have an obligation and a responsibility to understand how the systems work within their vehicles, and they need training. In a lot of cases, some of these things are a little more complex than what we have in vehicles today, and that’s going to continue to be a problem as we add more technology in the future. Making sure that dealerships are adequately able to train people — that’s going to be a big part of what’s needed in the future.
One of the real challenges is going to be there might not be a dealer for the person who’s buying this as a secondhand vehicle. How do you make sure the new owner is trained properly? We’re in a new era when it comes to vehicle technology and some of the systems that are on vehicles now, and we’ve got to figure out how to properly design, how to properly regulate and how to properly educate and train the consumer.
Last month, at its hearing on a fatal Tesla Model X crash, the National Transportation Safety Board determined the driver was partially at fault because of distraction and overreliance on Autopilot.
There’s a lack of understanding on the part of consumers in some cases as to what these systems can and cannot do, and what you can and cannot do when it comes to being disengaged from the driving task. That was certainly evident there. And we’ve seen that in a survey we did last year looking at whether consumers understand what these systems can do, and what they think they can do, based on the name of the system. Autopilot by Tesla was the one where there is this misconception on the part of users that the system, and that name in particular, allows you to be disengaged, completely disengaged, from the driving task, and take a nap or sleep or whatever. That was a real concern from the survey data. One of the things that automakers need to do is to make sure that they are not doing things such as simply naming their systems in a way that instills a false sense of confidence in the consumer.
Some systems allow drivers to have their hands off the wheel, too.
You’re asking the driver to stay engaged in the driving task, which is a very fundamental rule of being a Level 2 system, yet you’re allowing them, for convenience’s sake, to take their hands off the wheel as long as they monitor and respond quickly if they need to intervene. That’s a mixed message that we think is very dangerous. Taking your hands off the wheel is not a safety feature. That is a convenience feature. We’re not convinced that that’s something that should be allowed in the first place.
What’s the next step?
On the regulatory side, we will continue to have conversations with NHTSA and hope that they will move forward on trying to do something positive in terms of regulation.
The other thing we do is we share our research with other safety partners who may be pushing for regulation and hope they will pick this up and run with it and try to influence some form of legislation in the future, as well. But I think the evidence is building.