The limitations were apparent. The caveats, instantaneous. But the effort to draw meaningful insights from raw numbers? Arduous.

When federal safety regulators released information this month related to nearly 400 crashes involving driver-assist systems and 130 more involving autonomous vehicles, they provided a snapshot of the incidents — but not much more.

The government reports were much anticipated. But in the end, the findings offered little clarity.

“What NHTSA provided was a ‘fruit bowl’ of data with a lot of caveats, making it difficult for the public and experts alike to understand what is being reported,” said Jennifer Homendy, chair of the National Transportation Safety Board. “Independent analysis of the data is key to identifying any safety gaps and potential remedies.”

Calls came from many corners for better overall data, standardized data and further study of how these fledgling automated systems work in the real world.

But while driver-assist and autonomous driving systems are just starting to reach roads, some of that analysis is already underway.

For the past seven years, research scientist Bryan Reimer and his colleagues at the Massachusetts Institute of Technology's AgeLab have been collecting data from vehicles equipped with advanced driver-assist technology. They are working with more than 25 companies that are members of MIT's Advanced Vehicle Technology Consortium, an academic and industry group examining system performance and driver behavior with features such as Tesla's Autopilot and Cadillac's Super Cruise.

While much of the focus has been on the absolute number of crashes reported to NHTSA, a figure that has some value on its own, those numbers lack necessary context: the geographies in which the crashes occurred, the number of vehicle miles traveled with driver-assist technology activated, and how automakers collect and report the information. Even with that context, Reimer is circumspect about the value of crash data alone.
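To illustrate why those missing denominators matter, consider a minimal sketch, using made-up figures rather than actual NHTSA or automaker data, of how a raw crash count can point in the opposite direction from a per-mile crash rate:

```python
# Illustrative sketch with hypothetical figures, not NHTSA or automaker data.
# A system with more total crashes can still be safer per mile driven.

reported_crashes = {"System A": 380, "System B": 45}

# Hypothetical exposure: miles traveled with each system active.
miles_with_system_active = {
    "System A": 2_100_000_000,
    "System B": 90_000_000,
}

for system, crashes in reported_crashes.items():
    miles = miles_with_system_active[system]
    rate_per_million = crashes / miles * 1_000_000  # crashes per million miles
    print(f"{system}: {crashes} crashes, {rate_per_million:.2f} per million miles")
```

Under these hypothetical numbers, System A reports more than eight times as many crashes yet has roughly a third of System B's per-mile rate, which is why researchers insist on exposure data before drawing conclusions from crash counts.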

“Crashes and incidents are really rare events,” he said. “They’re outliers. That’s why we’re not looking at crashes. We’re looking at fundamental behaviors that lead to crashes.”

In September, an MIT study, believed to be the first to use real-world driving data from driver-assist systems, yielded nuanced findings about some of those fundamental behaviors. For example, researchers found that with the Autopilot system enabled, Tesla drivers' eyes strayed from the road more frequently and for longer periods than during manual driving.

A separate study issued months earlier showed that human drivers using GM's Super Cruise in Cadillac CT6 models collaborated with the driver-assist system in unexpected ways, taking more active roles in the driving process than researchers anticipated even while the system maintained control.

The work serves as a prelude to upcoming studies that will offer more apples-to-apples comparisons on the systems provided by various automakers, which is the sort of insight many initially hoped would be possible with the NHTSA data.

Reimer suggests crashes are merely a tip-of-the-iceberg indicator. Further work, he says, will shed light on how drivers shift their attention between manual and assisted driving, how they place their hands on the wheel in systems marketed as hands-free vs. hands-on, how they retake control after periods when an assist system is engaged, and how automation affects speed, among other behaviors.

“We desperately need to understand the denominators, the frequency of events and the behaviors underlying them to understand the benefits and limitations of automated and assisted driving,” said Reimer, who works within MIT’s Center for Transportation and Logistics on human factors issues. “We need to understand which aspects are working well and which ones need refining.”

On an individual basis, automakers and their engineers are often well versed in the capabilities and subtleties of their own systems. Where they fall short is in understanding how those systems compare with competitors' and how they function in practice in the real world.

Developing and validating something such as a driver-assist system requires more than evaluating individual components; it requires testing the integrated system in real-world conditions.

“There’s lots of good radar out there, but it’s not about what components they use,” Reimer said. “It’s about the system and software working in the real world. What are the strengths of our system compared to everyone else? What are the weaknesses, and how do we improve?”

Bridging those gaps should be a priority for the auto industry, Reimer argues. Many companies in the self-driving realm have said that they should not compete on safety. Much like in the airline industry, one crash or incident reflects poorly on all manufacturers at this early stage of the technology.

In the driver-assist realm, such arguments have been less convincing. Both the conveniences offered by driver-assist systems — hands-on vs. hands-free, for example — and the safety of such systems have become differentiating points.

Reimer, perhaps one of the few in a position to make concrete apples-to-apples comparisons, makes a case both for that capability and for stronger collaboration across the industry.

“What’s important is that we are doing better tomorrow than we do today as we evolve, learn, adapt and implement systems,” he said. “We need to do what we can to leverage data we have to compare them in a more apples-to-apples format — not because the industry needs more stringent standards, but they need to learn from the data and accelerate from there.”