Trust in virtual homologation relies not only on the transparency of the simulation process, but also on the quality of the underlying models, frameworks, and related tools. While several initiatives and research projects, such as prostep IVIP, INTACS, SetLEVEL, EVIDENT, and HAIViSH, have advanced the discussion on credibility, there remains a critical gap: no standardized metrics exist to assess the quality of simulation frameworks and models, particularly in sensor simulation.
The goal of this project is to initiate the discussion of a new standardized, machine-readable data format for storing and exchanging quality metrics for simulation frameworks and models. The scope also includes defining an agreed, use-case-specific weighting of these metrics to emphasize the features most relevant to particular tasks.
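To make the idea more concrete, the following is a minimal sketch of what such a machine-readable quality-metric record with use-case-specific weights could look like. All field names, metric names, and values are illustrative assumptions, not part of any existing or proposed standard.

```python
import json

# Hypothetical quality-metric record for a sensor simulation model.
# Metric and field names are purely illustrative assumptions.
metric_record = {
    "model": "lidar_sensor_model_v1",
    "metrics": {
        "physical_fidelity": 0.82,
        "numerical_stability": 0.95,
        "validation_coverage": 0.60,
    },
    # Use-case-specific weights, e.g. emphasizing physical fidelity
    # when the model feeds a perception-algorithm test.
    "weights": {
        "physical_fidelity": 0.5,
        "numerical_stability": 0.2,
        "validation_coverage": 0.3,
    },
}

def weighted_quality_score(record: dict) -> float:
    """Aggregate the individual metrics into one use-case-specific score."""
    return sum(
        record["weights"][name] * value
        for name, value in record["metrics"].items()
    )

# The record itself is what would be stored and exchanged;
# the score shows how an agreed weighting could be applied to it.
print(json.dumps(metric_record, indent=2))
print(f"weighted score = {weighted_quality_score(metric_record):.3f}")
```

A JSON-like structure is used here only because it is a common choice for machine-readable exchange formats; the actual serialization, schema, and metric taxonomy are precisely the open questions this project aims to put up for discussion.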