Member: AKKA, Akkodis Germany Consulting GmbH (AKKA)
Featured Standard: ASAM OpenLABEL®, ASAM OpenSCENARIO®
The EE Test Solutions & Diagnostics department at AKKA GmbH & Co. KGaA works closely with OEMs and suppliers to test, verify, and validate Advanced Driver Assistance Systems (ADAS) and autonomous driving functions. Testing is divided broadly into laboratory and real-world tests. In the laboratory, there are static and dynamic testing methods. Real-world testing is conducted either on a proving ground or on public roads. Laboratory tests include tests in virtual environments or simulators (including replay, open-loop, and closed-loop tests). Research and development in the autonomous driving and ADAS domain assesses reliability through naturalistic field operational tests that cover large numbers of test kilometers. Such tests generate Big Data.
The way to bring autonomous driving technology to the global market is to standardize the methods and approaches that support development, testing, and validation throughout the automotive industry. Through strong partnerships with industry and academia, and by evaluating upcoming technology demands and trends, ASAM standards such as OpenSCENARIO 2.0 and the upcoming OpenLABEL can open the door to the next step in the future of testing, verification, validation, and development. Hence I support ASAM.
Harsha Jakkanahalli Vishnukumar (Managing Director and CEO AKKA Technologies India)
The challenge with highly automated driving is to ensure functional safety. At level 3 automation according to BASt, or level 4 of the SAE classification, conventional validation strategies for driver assistance systems reach their limits, since the driver no longer has to be available as a fallback. Consequently, all conceivable situations must be controllable by the automated driving system, which means that the requirement specifications to be fulfilled by the system are no longer fully available or known. Hence the current release practice, based mainly on real-world tests, is no longer applicable to automated driving functions. For this reason, future trials are increasingly being shifted from the real world to virtual, simulated platforms. Transferring test activities into a virtual environment poses its own challenges: virtualizing the real world, modeling sensors accurately, and manually creating test cases or scenarios without complete requirement specifications for highly automated driving.
In real-world testing, the N-FOT (Naturalistic Field Operational Test) approach alone cannot cover all critical driving situations and scenarios. Although it offers realism and unique scenarios from the real world, such tests can be time-consuming, resource-intensive, and dangerous. Hence scenario-based virtual testing approaches are used. Scenarios can be applied throughout the entire development process, and can either be created manually or extracted from data recorded in the real world. Manual creation of scenarios requires requirement specifications, which are not fully available for highly automated driving; moreover, one has to think of a scenario before one can create it. The state-of-the-art approach for reusing data gathered from the real world is to pre-process it for calculating the key performance indicators of the system under test, conducting replay tests, and creating test cases and scenarios. Using recorded Big Data (usually in the petabyte range) is resource-intensive. Classical computer programs and classical data-mining techniques can extract only a fraction of the required data, since it is challenging to program all required characteristics of scenarios. Today, the fifth 'V' of Big Data (value) is extracted by manual labeling, which makes the data usable. Manual labeling of time-irrelevant information (objects, segments in individual frames) and time-relevant information (sequences, scenarios) is not a viable option because of its inconsistency, its time- and resource-intensive nature, and its limited quality, since average human performance in recognition and annotation/labeling has remained the same. For these reasons, researchers concentrate on generating scenarios automatically and extracting them from real-world recorded data. Based on state-of-the-art research and automated scenario-generation techniques, the valid scenarios found after automated generation are comparatively sparse.
Although the scenario-based approach promises a systematic way of testing ADAS and autonomous vehicles, obtaining valid and meaningful scenarios remains a major challenge.
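To make the notion of a scenario file concrete, the skeleton below shows what a minimal ASAM OpenSCENARIO 1.x (XML) description of a cut-in scenario could look like. Element names follow the OpenSCENARIO 1.0 conventions, but the road file path, entity names, and omitted bodies are illustrative placeholders, not a complete, validating scenario.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSCENARIO>
  <FileHeader revMajor="1" revMinor="0" date="2021-01-01T00:00:00"
              description="Illustrative cut-in scenario skeleton" author="example"/>
  <ParameterDeclarations/>
  <CatalogLocations/>
  <RoadNetwork>
    <!-- Road geometry is referenced externally, e.g. an ASAM OpenDRIVE file -->
    <LogicFile filepath="highway.xodr"/>
  </RoadNetwork>
  <Entities>
    <ScenarioObject name="Ego">
      <!-- Vehicle definition or catalog reference goes here -->
    </ScenarioObject>
    <ScenarioObject name="CutInVehicle">
      <!-- Definition of the vehicle performing the cut-in -->
    </ScenarioObject>
  </Entities>
  <Storyboard>
    <Init>
      <!-- Initial positions and speeds of Ego and CutInVehicle -->
    </Init>
    <!-- Story / Act / ManeuverGroup elements describing the cut-in maneuver -->
    <StopTrigger/>
  </Storyboard>
</OpenSCENARIO>
```

The key point for extraction pipelines is that both the static context (road network reference) and the dynamic behavior (Storyboard) live in one machine-readable file, so extracted scenarios can be replayed in any compliant simulator.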
In summary, three main challenges can be derived:
1. Virtualizing the real world and modeling sensors accurately for simulated test platforms.
2. Creating test cases and scenarios without complete requirement specifications, given that manual labeling of recorded Big Data is not viable.
3. The sparsity of valid scenarios: critical scenarios are rare in recorded real-world data, and automated generation yields comparatively few valid ones.
AI-Core offers solutions to these challenges in the test, verification, and validation of ADAS and autonomous driving systems. Existing or newly generated real-world raw data serves as the input. A group of deep neural networks (DNNs) within AI-Core extracts the time-irrelevant information (labels). This extracted information is then fed into a recurrent DNN to extract abstract scene-related information. In the future, the extracted information can be transformed into ASAM's OpenLABEL format for use in various applications. The real-world raw data, together with the extracted information, is then fed to a further set of AI-Core DNNs to extract scenarios. These scenarios contain both static and dynamic information about the surrounding environment, so ASAM's OpenSCENARIO format can be used here. The techniques used to detect and extract scenarios additionally detect novel scenarios in the available data without requiring an additional training process for the model-based algorithms used in AI-Core. The extracted labels and scenarios can further be used for KPI analysis of the system or function under test, open-loop tests, replay tests, and XIL tests. Although real-world data covers real-life scenarios, critical scenarios are very sparse in it. AI-Core therefore offers two novel generation approaches: one using latent-space vectors and another using multiple reinforcement-learning techniques. Both methods can generate scenarios that concentrate on boundary conditions, failures, and hotspots where the system under test could potentially fail. The newly created scenarios contain all the information needed to replicate them in a virtual environment for testing, and are hence compatible with ASAM's OpenSCENARIO.
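The label-extraction step described above ends with a transformation into ASAM's OpenLABEL format. As a minimal sketch, the function below converts hypothetical DNN detections (object id, class, bounding box per frame) into an OpenLABEL-style JSON structure. The top-level layout (`openlabel`, `metadata`, `objects`, `frames`) follows the OpenLABEL 1.0 conventions, but field details should be checked against the released JSON schema; the detections themselves are invented for illustration.

```python
import json

def detections_to_openlabel(detections_per_frame):
    """Convert hypothetical per-frame DNN detections into an OpenLABEL-style dict.

    detections_per_frame: {frame_idx: [(object_id, obj_type, (x, y, w, h)), ...]}
    """
    label = {
        "openlabel": {
            "metadata": {"schema_version": "1.0.0"},
            "objects": {},   # time-irrelevant: one entry per tracked object
            "frames": {},    # time-relevant: per-frame geometry
        }
    }
    ol = label["openlabel"]
    for frame_idx, detections in detections_per_frame.items():
        frame_objects = {}
        for obj_id, obj_type, bbox in detections:
            # Register the object once at root level (time-irrelevant label).
            ol["objects"].setdefault(
                str(obj_id), {"name": f"{obj_type}-{obj_id}", "type": obj_type}
            )
            # Attach per-frame bounding-box data (time-relevant label).
            frame_objects[str(obj_id)] = {
                "object_data": {"bbox": [{"name": "shape", "val": list(bbox)}]}
            }
        ol["frames"][str(frame_idx)] = {"objects": frame_objects}
    return label

# Example: two frames with a tracked pedestrian and a car.
doc = detections_to_openlabel({
    0: [(0, "pedestrian", (310, 220, 40, 90)), (1, "car", (520, 240, 180, 95))],
    1: [(0, "pedestrian", (316, 221, 40, 90))],
})
print(json.dumps(doc, indent=2))
```

Because the same object id appears at the root level and in each frame, downstream tools can query both "which objects exist in this recording" and "where is object 0 at frame 1" from one file.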
To complete the cycle, we extend the critical and mandatory scenario tests to the real world using our own Automated Testing Ground (ATG) platform, where vehicles are driven by driving robots or drive-by-wire technology together with automated surrounding actors on a closed proving ground.
All scenarios extracted and newly created by AI-Core can be reused in iterative automated testing, and in testing during development, to gain qualitative and quantitative insights into the maturity of the system under test. The ASAM OpenLABEL and OpenSCENARIO standards can be used throughout this iterative testing cycle, as shown in the figure below (derived from the dissertation work of M.Eng. Harsha Jakkanahalli Vishnukumar).
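The latent-space generation approach mentioned earlier can be illustrated with a deliberately simplified sketch: scenario parameters are mapped into a normalized latent space, perturbed by a random walk, and decoded back into concrete parameters, keeping only variants that satisfy a criticality predicate. Everything here is hypothetical: the parameter ranges, the criticality rule, and the linear encode/decode stand in for AI-Core's learned models, which are not public.

```python
import random

# Hypothetical parameter ranges for a cut-in scenario: (min, max) per dimension.
PARAM_RANGES = {
    "ego_speed_mps":    (15.0, 40.0),
    "cut_in_gap_m":     (5.0, 60.0),
    "cut_in_speed_mps": (10.0, 35.0),
}

def encode(params):
    """Map concrete parameters into [0, 1]^n (a stand-in for a learned latent space)."""
    return {k: (params[k] - lo) / (hi - lo) for k, (lo, hi) in PARAM_RANGES.items()}

def decode(latent):
    """Map latent coordinates back to concrete parameter values."""
    return {k: lo + latent[k] * (hi - lo) for k, (lo, hi) in PARAM_RANGES.items()}

def is_critical(params):
    """Hypothetical criticality predicate: small gap at high closing speed."""
    closing_speed = params["ego_speed_mps"] - params["cut_in_speed_mps"]
    return params["cut_in_gap_m"] < 15.0 and closing_speed > 5.0

def generate_boundary_variants(seed_params, n=200, step=0.08, rng=None):
    """Random walk in latent space; collect decoded variants that are critical."""
    rng = rng or random.Random(42)
    z = encode(seed_params)
    variants = []
    for _ in range(n):
        # Perturb each latent coordinate, clamped to the unit interval.
        z = {k: min(1.0, max(0.0, v + rng.uniform(-step, step))) for k, v in z.items()}
        candidate = decode(z)
        if is_critical(candidate):
            variants.append(candidate)
    return variants

seed = {"ego_speed_mps": 30.0, "cut_in_gap_m": 20.0, "cut_in_speed_mps": 20.0}
critical = generate_boundary_variants(seed)
print(f"{len(critical)} critical variants found")
```

In a real pipeline the encoder/decoder would be a trained generative model and the predicate would come from simulation results of the system under test; each surviving variant would then be serialized as an OpenSCENARIO file.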