The Need for a Common Verification Methodology in Autonomous Driving Development

by Angelos Lakrintis | September 23, 2019

Although verification is an integral part of the system development flow, the verification requirements of modern autonomous driving development (excluding ADAS verification methods) are barely met by current state-of-the-art technologies.

Formal, model-based methods provide an effective paradigm for validation, since they can offer a mathematical guarantee of correctness. The picture changes entirely, however, when one tries to verify a whole autonomous driving platform, including sensor fusion, data fusion, sensor calibration and autonomous driving scenario simulation.

On September 23, 2019, Foretellix, an Israeli start-up, announced that it has opened its Measurable Scenario Description Language (M-SDL) to the ADAS and AV ecosystem and contributed the language concepts to the Association for Standardization of Automation and Measuring Systems (ASAM) standards committee. The company promotes M-SDL as “the first open language that addresses multiple shortcomings of today’s formats, languages, methods and metrics used to verify and validate vehicle safety”.

Foretellix also announced its M-SDL Partners Program, providing a mechanism for industry feedback and refinement of M-SDL. A partial list of members includes AVL List GmbH, Volvo Group, Unity Technologies, Horiba Mira Ltd, TÜV SÜD, Automotive Artificial Intelligence (AAI) GmbH, Metamoto Inc, Vector Zero Inc, Trustworthy Systems Lab of Bristol University, and Advanced Mobility Institute of Florida Polytechnic University.

As many industry experts have noted, safety methods and metrics based on quantity of miles driven in simulation and road testing, the number of disengagements, and/or traditional test coverage are insufficient, non-scalable, and not easily shared or reused.

[Figure: M-SDL Ecosystem]

In addition, due to the uncontrollable behavior of AVs and traffic, developers cannot be sure their tests are orchestrating desired scenarios or evaluating test coverage as intended. Finally, none of these techniques offer adequate mechanisms to identify previously unknown hazardous edge case scenarios nor aggregate coverage metrics across all virtual and physical testing platforms.

With M-SDL opened and contributed to the standards body, tool vendors, suppliers, Tier 1s and developers will be able to:

  1. Use a common, human readable, high level language to simplify the capture, reuse and sharing of scenarios,
  2. Easily specify any mix of scenarios and operating conditions to identify previously unknown hazardous edge cases, and
  3. Monitor and measure the coverage of the autonomous functionality critical to prove AV safety, independent of tests and testing platforms.

Foretellix is also providing its Foretify technology to suppliers and OEMs. The tool brings a proven coverage-driven verification approach to the AV industry, signaling a move away from testing that focuses on quantity of miles toward a quality-of-coverage approach. Foretify should help consumers, developers, insurance companies and regulators collectively gain the quantifiable confidence in safety needed for the broad deployment of autonomous vehicles. More information on Foretellix’s Foretify technology can be found in this Strategy Analytics insight: Foretellix: Verification Practices for Autonomous Vehicles
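To make the quality-of-coverage idea concrete, the sketch below shows a minimal coverage-driven test campaign in Python. It is purely illustrative: the scenario (a cut-in maneuver), its parameter bins, and every function name are hypothetical, and this is neither M-SDL syntax nor the Foretify API. The point is the metric: instead of counting miles, the campaign reports what fraction of a defined scenario space was actually exercised, and which parameter combinations failed.

```python
import random

# Hypothetical parameter space for a "cut-in" scenario
# (illustrative bins only; not M-SDL syntax or the Foretify API).
EGO_SPEEDS_KPH = [30, 60, 90, 120]   # ego vehicle speed buckets
CUT_IN_GAPS_M = [5, 10, 20, 40]      # gap at which the other car cuts in
WEATHER = ["dry", "rain", "fog"]

def run_simulation(speed, gap, weather):
    """Stand-in for a real simulator run; returns True on pass.
    A real harness would launch a simulation platform here."""
    return gap / max(speed, 1) > 0.1 or weather == "dry"

def coverage_driven_campaign(num_tests, seed=0):
    """Randomly sample scenario parameters, track which bins were
    exercised, and collect failing combinations as edge cases."""
    rng = random.Random(seed)
    covered = set()
    failures = []
    for _ in range(num_tests):
        speed = rng.choice(EGO_SPEEDS_KPH)
        gap = rng.choice(CUT_IN_GAPS_M)
        weather = rng.choice(WEATHER)
        covered.add((speed, gap, weather))
        if not run_simulation(speed, gap, weather):
            failures.append((speed, gap, weather))
    total_bins = len(EGO_SPEEDS_KPH) * len(CUT_IN_GAPS_M) * len(WEATHER)
    return len(covered) / total_bins, failures

coverage, failures = coverage_driven_campaign(500)
print(f"scenario coverage: {coverage:.0%}, failing combinations: {len(failures)}")
```

A production flow would replace the random sampler with constrained-random generation biased toward uncovered bins, and aggregate coverage across both virtual and physical test platforms, as the article describes.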

Systems based on AI/Machine Learning are Hard to Verify and Regulate

Machine Learning, a sub-branch of Artificial Intelligence, is becoming very popular in the automotive, military, mobile phone and other emerging technology areas. Machine Learning’s advancements are quite impressive, yet such systems are extremely hard to verify, let alone regulate. Many complex systems nowadays use machine learning techniques. For example, some of the image recognition logic attached to autonomous vehicle cameras uses deep learning and other neural network techniques, such as CNNs (Convolutional Neural Networks) and DNNs (Deep Neural Networks).

On the regulatory front, with regard to automotive AI/Machine Learning, there is an informal document from UNECE which looks at the issue of Artificial Intelligence and vehicle regulations. The document is called WP.29-175-21 and was proposed for consideration in June 2018. More information on the UNECE paper can be found here: Artificial Intelligence and Vehicle Regulations

Machine Learning and especially Neural Networks present two main problems for verification:

  • Neural networks work impressively well, but it is very hard to understand how they actually work, in the sense of “what rules do they operate by”. This makes them hard to reason about and even harder to verify, especially in the context of a larger system that is not composed solely of neural networks.
  • The second problem concerns inference and self-improvement of the neural network. This can be quite useful (for example, an autonomous vehicle improves as it operates by sending data back to the datacentre for analysis and further error correction). However, some regulatory bodies (and hence manufacturers) do not support this approach because it makes verification harder: they argue that the autonomous platform on the road will no longer be the same autonomous platform that was once verified inside the factory.

Although big companies like NVIDIA and Intel/Mobileye are pitching their own safety models for autonomous vehicles, it is still unclear how one could verify those models.

Verification methodologies for autonomous driving development are still in their infancy. Only time will tell whether Foretellix will become the go-to method for AV verification in the coming years.
