Disclaimer

This document is the copyrighted property of ASAM e. V. In alteration to the regular license terms, ASAM allows unrestricted distribution of this standard. §2 (1) of ASAM's regular license terms is therefore substituted by the following clause: "The licensor grants everyone a basic, non-exclusive and unlimited license to use the standard ASAM OpenLABEL".

1. Foreword

ASAM e. V. (Association for Standardization of Automation and Measuring Systems) is a non-profit organization that promotes standardization of tool chains in automotive development and testing. Our members are international car manufacturers, suppliers, tool vendors, engineering service providers and research institutes. ASAM standards are developed by experts from our member companies and are thus based on real use cases. They enable the easy exchange of data or tools within a tool chain. ASAM is the legal owner of these standards and responsible for their distribution and marketing.

ASAM standards define file formats, data models, protocols and interfaces for tools that are used for the development, test and validation of electronic control units (ECUs) and of the entire vehicle. The standards enable easy exchange of data and tools within a tool chain. They are applied worldwide.

2. Introduction

2.1. Overview

ASAM OpenLABEL standardizes the annotation format and the labeling methods for multi-sensor data streams and scenario files. Using a standardized format helps cut costs and save resources used in creating, converting, and transferring annotated and tagged data. ASAM OpenLABEL is represented in a JSON format and can therefore be easily parsed by tools and applications.

ASAM OpenLABEL specifies the different labeling methods that can be applied to multi-sensor data streams, for example, 2D bounding boxes for image data. With ASAM OpenLABEL, several labeling methods are provided which enable users to label common data streams, such as images or point clouds. Besides adding labels to multi-sensor data streams (labeling), ASAM OpenLABEL also provides methods to add tags to scenarios (tagging). These tags can be used to categorize scenarios and make them searchable in large databases. They can also provide additional information about the individual scenario, such as who captured or created the scenario, and with what setup the scenario was captured.

ASAM OpenLABEL provides a common data structure for organizing annotations for labeling multi-sensor data streams and tagging simulation and test scenarios.

2.1.1. Multi-sensor data labeling

For the development, testing, and validation of highly automated driving functions, the industry makes extensive use of Machine Learning (ML), especially for realizing perception and prediction tasks. Machine learning requires significant amounts of training data. The data has to be annotated and enriched with metadata to be useful in the training and validation phases.

The lack of an industry standard aligning the structure and organization of these annotations creates several difficulties:

  • It limits the reuse of annotated datasets.

  • It poses challenges regarding the maintenance and updating of the annotations.

  • It limits the sharing of datasets across the industry and between industry and academia.

  • It has a negative impact on the quality of annotations.

The goals of the multi-sensor data labeling use case in ASAM OpenLABEL are as follows:

  • Enable efficient sharing of annotated perception datasets and object lists.

  • Increase the overall quality of annotations by providing a common data structure for annotations.

  • Improve the maintainability and reuse of annotated datasets.

The multi-sensor data labeling use case in ASAM OpenLABEL fulfills the requirements of the following main target groups:

  • Perception/computer-vision engineers

  • Machine-learning engineers

  • Perception/computer-vision research scientists

  • Machine-learning research scientists

  • Data-annotation engineers

  • Data-annotation analysts

  • Test engineers

2.1.2. Scenario tagging

Scenario databases storing multi-sensor data, annotated multi-sensor data, simulation scenarios, and test scenarios can be very extensive. The sensor data and scenarios stored in these databases must be organized and tagged using semantic, meaningful tags. These tags refer, for example, to the content of the data, its ODD, the high-level behavior of the dynamic agents, and administrative information. Extracting the information required for the tags from scenario artifacts can be difficult and inefficient, and for some types of data it is impossible, because the expressiveness of the scenario definition language used is limited. Scenario tagging based on ASAM OpenLABEL addresses these issues.

The goals of the scenario tagging use case in ASAM OpenLABEL are as follows:

  • Enable standardized clustering of test scenarios in scenario databases.

  • Facilitate scenario storage systems that are separate from the scenario definition representation.

  • Enable efficient search and filtering of test scenarios in scenario databases.

  • Enable sharing of information on test scenario categories and clusters between different databases or owners.

  • Facilitate the sharing of scenarios between systems that may not have the ability to inspect the scenario definition or underlying scenario data.

  • Improve maintainability and reuse of test scenarios and scenario data.

  • Enable and enhance machine-learning training and validation datasets with additional information to organize the datasets.

  • Enable specific machine-learning classification tasks to be performed on scenario data.

The scenario tagging use case in ASAM OpenLABEL fulfills the requirements of the following main target groups:

  • Systems engineers

  • Validation and verification engineers

  • Functional-safety engineers

  • Simulation specialists

2.1.3. Deliverables

2.2. Conventions and notation

2.2.1. Naming conventions

The following conventions apply in this document:

  • Element names should be meaningful names with defined semantics.

  • Element names should be written in camel case as ASCII strings.

  • The first character shall be a letter, an underscore, or a dollar sign ($).

  • Subsequent characters may be a letter, a digit, an underscore, or a dollar sign.

  • Reserved JavaScript keywords should be avoided.

  • All element names should be uniquely defined in one ontology.

2.2.2. Units

Unless stated otherwise, all numeric values within this specification are in SI units. Table 1 details the units used.

Table 1. Units
| Unit of                   | Unit              | Symbol |
|---------------------------|-------------------|--------|
| Length                    | Meter             | m      |
| Duration, (relative) time | Second            | s      |
| Speed                     | Meters per second | m/s    |
| Mass                      | Kilogram          | kg     |
| Angle                     | Radian            | rad    |
| Light intensity           | Lux               | lx     |
| Image coordinate          | Pixel             | px     |

Timestamp

The timestamp used in labeling depends on the raw sensor data. Different sensors sample data with various timestamp formats:

  • UT (Universal Time): UT is derived from the rotation of the Earth. With improvements in measurement, several versions of UT have been defined: UT0, UT1, UT2. The UT time scale is irregular, since the rotation rate of the Earth is not constant.

  • TAI (Temps Atomique International): TAI is the international atomic time scale based on a continuous counting of the SI second. It is provided by several laboratories around the world. The instruments "producing" TAI are ensembles of atomic frequency standards, such as rubidium oscillators, cesium oscillators, and hydrogen masers. TAI was set to coincide exactly with UT1 (Universal Time version 1) at 0 hours of 1 January 1958.

  • UTC (Universal Time Coordinated): UTC was introduced for the purpose of having a time with a constant scale that does not deviate too much from UT1. UTC runs at the same rate as TAI. A leap second is introduced into UTC once the difference between UT1 and UTC exceeds 0.9 s.

The time reference of many GNSS (Global Navigation Satellite System) systems is based on the time scales of UTC and TAI with a specific constant offset [1].

  • GPST (GPS Time) [2]: GPST is based on TAI as provided by the frequency standards of the GPS control center. It was introduced at 0 hours on 6 January 1980 (UTC) and always has a constant offset of -19s to TAI.

  • GST (Galileo System Time): GST is a continuous time scale maintained by the Galileo Central Segment and synchronized with TAI. GST started from 0 hours on 22 August 1999 (UTC) and the offset between GST and TAI is -13 seconds.

  • GLONASST (GLONASS Time) [3]: GLONASST is generated by the GLONASS Central Synchroniser and is synchronized with TAI. The constant offset between GLONASST and UTC (SU) is three hours.

  • BDT (BeiDou Time): BDT is a continuous time scale starting at 0 hours on 1 January 2006 (UTC). It is synchronized with UTC (BSNC). The constant offset to TAI is -33 seconds.

The following overview shows how different timestamp standards can be transformed:

  • UTC = TAI - LS

  • GPST = UTC(USNO) + LS - 19s

  • GST = TAI - 13s

  • GLONASST = UTC(SU) + 3h

  • BDT = UTC(BSNC) + LS - 33s
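As an illustration of the offset arithmetic above, the following minimal Python sketch (not part of the standard) converts a UTC timestamp into GPST and GST. The cumulative leap-second count LS is an assumption that must be supplied by the caller from an external table, for example, 37 s since 1 January 2017.

Python example

from datetime import datetime, timedelta

# Minimal sketch of the offset arithmetic above. LS is the cumulative number
# of UTC leap seconds and must be taken from an external table
# (37 s since 2017-01-01).
def gpst_from_utc(utc: datetime, ls: int) -> datetime:
    """GPST = UTC(USNO) + LS - 19 s."""
    return utc + timedelta(seconds=ls - 19)

def gst_from_utc(utc: datetime, ls: int) -> datetime:
    """GST = TAI - 13 s = UTC + LS - 13 s."""
    return utc + timedelta(seconds=ls - 13)

utc = datetime(2021, 9, 3, 11, 23, 56)
print(gpst_from_utc(utc, ls=37))  # 2021-09-03 11:24:14
print(gst_from_utc(utc, ls=37))   # 2021-09-03 11:24:20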

Figure 1. The relationship between GNSS time systems and UTC

Figure 1 shows the relationship between GNSS time systems and UTC. It was derived from Timescales [4].

Unix time is widely used in operating systems. It is the number of seconds that have elapsed since the Unix epoch, not counting UTC leap seconds. The Unix epoch started at 00:00:00 UTC on 1 January 1970. Every day is treated as if it contains exactly 86,400 seconds. Due to its handling of leap seconds, it is not a linear representation of UTC.

Representation of date and time format

The representation of date and time formats is specified by the ISO 8601 standard [5]. The following format pattern is used:

yyyy-MM-ddTHH:mm:ss.FFFZ

Here, T is used as the time designator, and the period (.) is used as the separator for the following millisecond portion. An explanation is given in Table 2:

Table 2. Date and time formats
| Specifiers | Meaning                                       | Example      |
|------------|-----------------------------------------------|--------------|
| yyyy       | Year (four digits)                            | 2021         |
| M, MM      | Month in year (without/with leading zero)     | 9, 09        |
| d, dd      | Day in month (without/with leading zero)      | 3, 03        |
| H, HH      | Hours, 0-23 count (without/with leading zero) | 7, 07        |
| m, mm      | Minutes (without/with leading zero)           | 2, 02        |
| s, ss      | Seconds (without/with leading zero)           | 4, 04        |
| F, FF, FFF | Milliseconds (without/with leading zeros)     | 357, 04, 002 |
| Z          | RFC 822 time zone shifted to GMT              | Z, +0100     |

If the time is in UTC, add a Z character directly after the time without a space. Z is the zone designator for the zero UTC offset. For example, 11:45 UTC is represented as 11:45Z or T1145Z.

If the time is in a time zone other than UTC, the UTC offset is appended to the time in the same way that Z was above, in the form ±[hh]:[mm], ±[hh][mm], or ±[hh].

At a given date and time of 2021-09-03 11:23:56 in the Central European Time zone (CET), the following standard-format output is produced:

2021-09-03T11:23:56.000+0100
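The following minimal Python sketch (an illustration, not part of the standard) produces the pattern above with the standard datetime module. Since %f yields microseconds, the string is trimmed to a millisecond portion.

Python example

from datetime import datetime, timedelta, timezone

# Sketch: format a timestamp as yyyy-MM-ddTHH:mm:ss.FFFZ. %f yields
# microseconds, so the result is trimmed to milliseconds.
cet = timezone(timedelta(hours=1))  # +0100, as in the example above
t = datetime(2021, 9, 3, 11, 23, 56, tzinfo=cet)
stamp = t.strftime("%Y-%m-%dT%H:%M:%S.%f%z")
print(stamp[:23] + stamp[26:])  # 2021-09-03T11:23:56.000+0100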

2.2.3. Modal verbs

To ensure compliance with the ASAM OpenLABEL standard, users need to be able to distinguish between mandatory requirements, recommendations, permissions, as well as possibilities and capabilities.

The following rules for using modal verbs apply:

Table 3. Rules for using modal verbs
| Provision | Verbal form |
|---|---|
| Requirements: Requirements shall be followed strictly in order to conform to the standard. Deviations are not allowed. | shall, shall not |
| Recommendations: Recommendations indicate that one possibility out of the several available is particularly suitable, without mentioning or excluding the other possibilities. | should, should not |
| Permissions: Permissions indicate a course of action permissible within the limits of ASAM OpenLABEL deliverables. | may, need not |
| Possibilities and capabilities: Verbal forms used to state possibilities or capabilities, whether technical, material, physical, etc. | can, cannot |
| Obligations and necessities: Verbal forms used to describe legal, organizational, or technical obligations and necessities that are not regulated or enforced by the ASAM OpenLABEL standard. | must, must not |

2.2.4. Typographic conventions

This documentation uses the following typographical conventions:

Table 4. Typographical conventions
| Mark-up | Definition |
|---|---|
| Code elements | This format is used for code elements, such as technical names of classes and attributes, as well as attribute values. |
| Terms | This format is used to introduce glossary terms and new terms, and to emphasize terms. |

2.2.5. Use of IDs

The following rules apply to the use of IDs in ASAM OpenLABEL:

  • IDs shall be unique within a class.

3. Scope

ASAM OpenLABEL establishes the basic principles and methods for annotating multi-sensor data streams and for tagging test scenarios for automated driving development, validation, and verification.

The ASAM OpenLABEL standard

  • specifies the annotation schema to which valid ASAM OpenLABEL annotation instances shall conform.

  • represents the annotation schema for ASAM OpenLABEL in JSON schema. The JSON schema defines the structure, sequence, elements, and values of ASAM OpenLABEL.

  • explains relationships between different elements in the ASAM OpenLABEL annotation schema, for example, actions, objects, events, contexts, relations, frames, tags.

  • gives guidelines for using ASAM OpenLABEL.

This version of ASAM OpenLABEL does not discuss quality nor provide quality criteria related to annotations. Future versions of ASAM OpenLABEL may deal with this issue.

3.1. Multi-sensor data labeling

The ASAM OpenLABEL standard

  • defines and organizes the annotation data structures, including geometries, coordinate systems and transforms, and other concepts relevant to spatiotemporal annotations for multi-sensor data labeling.

  • does not provide a taxonomy/ontology of physical/abstract entities relevant to the road traffic domain. Instead, it specifies mechanisms to include external knowledge repositories/ontologies and recommends the use of ASAM OpenXOntology as the ontology of reference.

  • does not provide rules, specifications, or guidelines on how to annotate entities for multi-sensor data labeling. Nor does it provide any recommendations as to what elements of a physical entity should be included or not included in a geometry.

An ASAM OpenLABEL multi-sensor data labeling instance shall follow the provided multi-sensor data labeling schema to be considered valid and compliant with ASAM OpenLABEL.

3.2. Scenario tagging

The ASAM OpenLABEL standard

  • defines and organizes the annotation data structure for test scenario tagging.

  • defines the set of ASAM OpenLABEL tags, their relationships, and the mechanisms to include the ASAM OpenLABEL set of scenario tags in valid annotation instances of test scenarios.

  • does not define a language or format to describe test scenarios.

An ASAM OpenLABEL scenario tagging instance shall use the tagging schema and the set of tags provided in ASAM OpenLABEL to be considered valid and compliant with ASAM OpenLABEL.

4. Normative references

The following documents are referred to in the text in such a way that some or all of their content constitutes some of the requirements set out in this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.

  • ASAM OpenDRIVE 1.7.0 [6]

  • ASAM OpenSCENARIO 1.1.0 [7]

  • ASAM OpenSCENARIO 2.0.0 [8]

  • ASAM OpenXOntology 1.0.0 [9]

  • BSI PAS 1883 [10]

  • ISO 8601 [5]

  • ISO 8855 [11]

  • SAE J3016 (2021) [12]

5. Terms and definitions

AD (Autonomous Driving)

Non-abbreviated form: Autonomous Driving

ADAS (Advanced Driver Assistance System)

Non-abbreviated form: Advanced Driver Assistance System

Annotation (process)

Process of enriching raw data, for example, test scenario artifacts or data streams from multiple sensors (such as cameras, LiDARs, and radars), with metadata. This metadata describes the content of the raw data, for example, static or dynamic objects populating a video, actions that are performed, or environmental conditions. Additional information regarding the data may also be included. Data that has already been enriched can be enriched further.

Annotation instance

Enriches raw data with metadata required for the specific task. Annotation instances are usually serialized in a text-based file format, for example, JSON. Annotation instances have to conform to a pre-defined annotation schema.

Annotation instance format

File format for serialization and storage of annotation instances. ASAM OpenLABEL uses JSON as annotation instance format.

Annotation schema

Provides structure and constraints for annotation instances. Annotation instances shall adhere to the schema to be considered well-formed and valid. The definition of an annotation schema is the core of ASAM OpenLABEL.

Annotation schema format

File format for serialization and storage of an annotation schema. ASAM OpenLABEL uses JSON schema as annotation schema format.

Knowledge repository

Database that stores, organizes, and categorizes knowledge. In the context of ASAM OpenLABEL, knowledge repositories organize, structure, and define domain concepts relevant to the annotation task, for example, the road traffic domain. Knowledge repositories may be defined, for example, as free texts, structured taxonomies, or formal ontologies.

Labeling

Process for generating spatiotemporal descriptions for data, using labeling geometries and other constructs to provide richer information compared to tags.

Labeling is a specialization of Annotation.

Labeling geometries

Spatiotemporal constructs used to identify, isolate, and localize specific semantic concepts to be annotated in the raw data, for example, bounding boxes, cuboids, and others.

LiDAR (Light Detection and Ranging)

Restricted term: LIDAR

Method for measuring distances by illuminating the target with laser light and measuring the reflection with a sensor.

ODD (Operational Design Domain)

Source: SAE J3016 (2021) [12]

Operating conditions under which a given driving automation system or feature thereof is specifically designed to function, including, but not limited to, environmental, geographical, and time-of-day restrictions, and/or the requisite presence or absence of certain traffic or roadway characteristics.

Ontology

Formal, explicit specification of a shared conceptualization. Ontologies may be defined in formal knowledge representation languages. In the context of ASAM OpenLABEL, an ontology is a machine-readable artifact that organizes and defines semantic concepts relevant to the labeling tasks.

Radar (Radio Detection and Ranging)

Restricted term: RADAR

Device or system that consists of a synchronized radio transmitter and receiver that emits radio waves and processes their reflections for display. A radar is used especially for detecting and locating objects.

Raw data

Data that can be enriched with metadata. Raw data may take many forms, for example, individual files, file streams, or test scenarios artifacts. Relevant examples of raw data for ASAM OpenLABEL are png images, frames in a video sequence, pcd point clouds, OpenSCENARIO files, and OpenLABEL files themselves.

Tagging

Process for adding simple and complex semantic tags to any information container, such as images, videos, or test scenarios.

Tagging is a specialization of the annotation process.

Test scenario

Scenario intended for the testing and assessment of Advanced Driver Assistance Systems (ADAS) and the system under test.

6. Conceptual overview

6.1. Data annotation in ASAM OpenLABEL

Data annotation is the process of enriching raw data, for example, data streams from multiple sensors (such as cameras, LiDAR, and radar) or test scenario artifacts, with additional metadata. This metadata relates to the content of the raw data, for example, static or dynamic objects populating a video, actions they are performing, or environmental conditions. Additional information regarding the data may also be included.

Figure 2. Relevant concepts for data annotation

Figure 2 shows the concept and terms related to data annotation.

Raw data is data that can be enriched with metadata. Raw data can take many forms, for example, individual files, file streams, or test scenario artifacts. Relevant examples of raw data for ASAM OpenLABEL are png images, frames in a video sequence, pcd point clouds, or OpenSCENARIO files.

Annotation instances enrich raw data with metadata required for the specific task. Annotation instances are usually serialized in a text-based file, for example, JSON. JSON is the format used for ASAM OpenLABEL. Annotation instances shall conform to a predefined annotation schema.

The annotation schema provides the specific structure and set of constraints that the annotation instances need to follow to be considered well-formed and valid. The definition of an annotation schema is the core of ASAM OpenLABEL. The annotation schema for ASAM OpenLABEL is represented as a JSON schema.

For applications with a heavy semantic load, such as the use cases relevant for ASAM OpenLABEL, it is advisable to refer to external knowledge repositories, for example, ontologies or vocabularies. An annotation schema regulates the data validity of annotation instances by providing their data model. Knowledge repositories can add value to this: They provide information about the content of the annotations and allow analyzing the validity of that content. Such external resources organize, structure, and define the semantics of the entities that annotations refer to. Ontologies additionally define the relationships between the entities. ASAM OpenLABEL assumes the use of external knowledge repositories to organize the semantic content of annotations.

ASAM OpenLABEL defines annotation schemas that are valid for specific use cases with specific raw data to be annotated. The two primary use cases considered for ASAM OpenLABEL are multi-sensor data labeling and scenario tagging.

6.1.1. Multi-sensor data labeling

Figure 3. Multi-sensor data labeling concept

Figure 3 shows the concepts related to data annotation as applied to multi-sensor data labeling. ASAM OpenLABEL covers the definition of the annotation schema for multi-sensor data labeling.

Multi-sensor data labeling use cases focus on raw data that is the output of multiple sensors, for example, cameras, LiDAR, or radar. These sensors equip typical advanced driver assistance systems (ADAS) and autonomous driving (AD) systems. Such raw data is often stored in pcd, png, other common image formats, or point cloud and video formats.

For this type of raw data, there is a lot of semantic content that has to be annotated. The annotations require geometries, for example, bounding boxes, polygons, or other primitives, to isolate and localize relevant semantic concepts within the raw data. Semantically, labels usually refer to agent type identification, relations between agents, actions they are performing, and the contexts in which these actions or agents take place or exist.

Additional information included in this annotation use case encompasses details about spatial calibration across sensors, temporal synchronization, coordinate transforms, and consistent entity IDs across frames and sensor streams.

Example

Figure 4. Multi-sensor data labeling example

Figure 4 shows an example using ASAM OpenLABEL for multi-sensor data labeling. The example.pcd and example.png files contain raw sensor data streams that are annotated according to the ASAM OpenLABEL annotation schema. The ASAM OpenLABEL annotation schema is contained in the openlabel_json_schema.json file. The example.json file contains the annotations of the example.pcd and example.png files. The annotations in the example.json file contain references to an external ontology in the example.owl file. The example.json file can be validated using the openlabel_json_schema.json file. The example.owl file is used to semantically enrich the annotations in the example.json file.

6.1.2. Scenario tagging

Figure 5. Scenario tagging concept

Figure 5 shows the concepts related to data annotation as applied to scenario tagging. ASAM OpenLABEL covers the definition of the annotation schema for scenario tagging and an ontology for tags.

Scenario tagging use cases focus on raw data that is used in the development, testing, and validation process of ADAS and AD functions, for example, test scenarios or simulation scenarios. Often the format of such raw data is OpenSCENARIO, GEOscenario, M-SDL, or other domain specific languages or formats used to describe and store simulation and test scenarios.

In addition to the raw data types mentioned above, videos, natural language descriptions, or any other data that contains a visualization or a description of a driving situation evolving through time can be treated as relevant raw data for the scenario tagging use case. This even includes valid OpenLABEL annotation instances for multi-sensor data labeling.

Annotations for this type of data are usually not semantically dense and consist of a set of tags that are associated with a specific scenario instance or set of scenario instances. Semantically, tags usually refer to elements related to the content of the scenario, such as its ODD or the behavior of some agents.

Additional information included in this annotation use case encompasses details about authorship, versioning, and other high-level administrative information related to the scenario.

Example

Figure 6. Scenario tagging example

Figure 6 shows an example using ASAM OpenLABEL for tagging scenario files. The example.xosc file contains a scenario description that was annotated following the ASAM OpenLABEL annotation schema. The ASAM OpenLABEL annotation schema is contained in the openlabel_json_schema.json file. The annotations of the example.xosc file are contained in the example.json file. The annotations in the example.json file contain references to an external ontology in the openlabel_ontology_scenario_tags.ttl file. The example.json file can be validated using the openlabel_json_schema.json file. The openlabel_ontology_scenario_tags.ttl file is used to semantically enrich the annotations in the example.json file.

6.2. Annotation schema and its format

The annotation schema defines the structure of annotations, data types, and conventions needed to unambiguously interpret the annotations. It also specifies how the annotation data is encoded for storage into computer files.

The annotation schema of ASAM OpenLABEL is designed to be flexible enough to tackle annotation tasks ranging from simple object-level labeling in single images, using, for example, bounding boxes or semantic segmentation, to complex multi-sensor data labeling tasks, involving, for example, cuboids, odometry, coordinate systems, and transforms. The annotation schema and its format (JSON schema) are also designed to facilitate the serialization of labels in files or messages that can be stored and exchanged between computers while remaining human-readable.

6.2.1. Annotation schema (JSON schema)

The annotation schema is described and formatted as a JavaScript Object Notation schema (JSON schema). It defines the shape to which valid JSON annotation instances shall conform. The structure of the ASAM OpenLABEL annotation schema is serialized in the ASAM OpenLABEL JSON schema file. The annotation schema itself conforms to the JSON schema Draft 7 specification [13].

There are several software packages in different programming languages that can be used to validate a JSON payload against the JSON schema. A JSON schema validation asserts constraints on the structure of the instance JSON data.

The JSON schema validation only inspects the structure and types of the key-value pairs. A JSON schema does not validate the semantics behind the content of key-value pairs. A certain level of semantic validation can be achieved by using external resources, such as the ontologies of ASAM OpenXOntology, reasoning engines, and validation scripts.
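As an illustration, the following minimal Python sketch validates an annotation instance against the OpenLABEL JSON schema with the jsonschema package. The file names are placeholders.

Python example

import json
from jsonschema import Draft7Validator  # pip install jsonschema

# Sketch: validate a JSON annotation instance against the OpenLABEL JSON
# schema, which conforms to Draft 7. File names are placeholders.
with open("openlabel_json_schema.json", encoding="utf-8") as f:
    schema = json.load(f)
with open("example.json", encoding="utf-8") as f:
    instance = json.load(f)

errors = list(Draft7Validator(schema).iter_errors(instance))
for error in errors:
    print("/".join(str(p) for p in error.path), error.message)
print("valid" if not errors else f"{len(errors)} violation(s)")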

The annotation schema data structure of ASAM OpenLABEL represents annotations as a dictionary. Therefore, all data is represented as key-value pairs. These key-value pairs are sometimes referred to as items in certain programming languages. Keys are strings, that is, arrays of characters. Values can be the following:

  • Primitives (string, number, and Boolean)

  • Arrays of primitives

  • Dictionaries

  • null (A special type to denote the key exists but has no value.)

Keys, as strings, encode either keywords defined in the JSON schema, for example object, coordinate_system, name, type, or identifiers. Identifiers can be numerical, for example 0, 5, strings, for example CAM, ODOM, or unique identifiers, for example, 123e4567-e89b-12d3-a456-426614174000. The JSON schema determines which pattern keys shall follow for different types of items, for example, regular expressions to determine that keys shall be string representations of numbers from 0 to 9.

This data structure matches with the syntax of JSON data formatting. As a consequence, ASAM OpenLABEL content can be expressed as JSON strings and made persistent as JSON files.

JSON payloads and files

Any ASAM OpenLABEL annotation instance can be expressed as a JSON string payload. That means the actual data payload that contains the key-value pairs is expressed as a string.

A JSON file, for example, openlabel_annotation.json, can be created by storing the JSON string payload using any computer programming language that serializes it into a text file. In ASAM OpenLABEL, UTF-8 (8-bit Unicode Transformation Format) shall be used as the encoding format of characters.
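As an illustration, the following minimal Python sketch serializes a payload into a UTF-8 encoded JSON file; the file name is arbitrary.

Python example

import json

# Sketch: serialize an annotation payload into a UTF-8 encoded JSON file.
payload = {"openlabel": {"metadata": {"schema_version": "1.0.0"}}}
with open("openlabel_annotation.json", "w", encoding="utf-8") as f:
    json.dump(payload, f, ensure_ascii=False, indent=4)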

JSON example

{
    "openlabel" : {
        "metadata" : {
            "schema_version" : "1.0.0"
        },
        "objects" : {
            "0" : {
                "name" : "object1",
                "type" : "car"
            },
            "1" : {
                "name" : "object2",
                "type" : "pedestrian"
            }
        }
    }
}
JSON data can be shown clearly arranged using indentation, as in the code above. Nevertheless, other representations are equally valid, and compact representations without whitespace are preferred for reducing the size of JSON files. The above code, for example, can be written as: {"openlabel":{"metadata":{"schema_version":"1.0.0"},"objects":{"0":{"name":"object1","type":"car"},"1":{"name":"object2","type":"pedestrian"}}}}

JSON parsers

Any JSON parser application, package, and programming language can be used to interpret (parse) the content.

Languages with libraries that support reading and writing JSON data and validating against a JSON schema include Python, TypeScript/JavaScript, and C++.

It is out of the scope of this standard to define reference implementations of parsers to load and save JSON data compliant with the JSON schema.

Other encoding formats

The ASAM OpenLABEL format matches the syntax of JSON. It was originally developed using the JSON schema as the main pillar to define the structure. Therefore, this version of ASAM OpenLABEL requires the use of JSON as the annotation and file format.

Nevertheless, other encoding formats may be considered for future versions of ASAM OpenLABEL as long as they satisfy the same structure, type, and constraints requirements defined by the JSON schema.

6.2.2. Structure

Figure 7. ASAM OpenLABEL high-level annotation structure

Figure 7 shows the high-level structure of the ASAM OpenLABEL annotation schema. ASAM OpenLABEL can be used for labeling and tagging.

Labeling focuses on producing spatiotemporal descriptive information about data, such as images. Objects, actions, events, contexts, and relations provide the flexibility to express complex labels.

Tagging aims to provide mechanisms to add simple and complex tags to any content, such as images, data files, or scenarios.

Additional structures provide details for metadata, ontologies, frames, and coordinate systems.

The following list shows all objects used in ASAM OpenLABEL.

JSON schema

{
    "openlabel" : {
        "properties": {
            "actions": {...},
            "contexts": {...},
            "coordinate_systems": {...},
            "events": {...},
            "frame_intervals": {...},
            "frames": {...},
            "metadata": {...},
            "objects": {...},
            "ontologies": {...},
            "relations": {...},
            "resources": {...},
            "streams": {...},
            "tags": {...}
            }
        }
    }
}

The annotation schema format is represented in the ASAM OpenLABEL JSON schema. The main object is the openlabel object. It contains the basic objects used in ASAM OpenLABEL. Some objects are used in both the multi-sensor data labeling and the scenario tagging use case, for example, the metadata and ontologies objects. Other objects are used exclusively in one of the two use cases.

The following list shows all objects used in the domain of multi-sensor data labeling.

JSON schema

{
    "openlabel" : {
        "properties": {
            "actions": {...},
            "contexts": {...},
            "coordinate_systems": {...},
            "events": {...},
            "frame_intervals": {...},
            "frames": {...},
            "metadata": {...},
            "objects": {...},
            "ontologies": {...},
            "relations": {...},
            "resources": {...},
            "streams": {...}
        }
    }
}

The following list shows all objects used in the domain of scenario tagging.

JSON schema

{
    "openlabel" : {
        "properties": {
            "metadata": {...},
            "ontologies": {...},
            "tags": {...}
        }
    }
}

The specific annotation schema for multi-sensor data labeling and scenario tagging, including detailed descriptions of each object, can be found in each corresponding section.


6.3. Metadata

In ASAM OpenLABEL, metadata is understood as additional information about the labels and the content to be labeled. Examples of metadata are the ASAM OpenLABEL version used, the file version, authorship, or any other custom information.

The information inside metadata shall be used for informative purposes by applications or humans managing ASAM OpenLABEL files.

Class: metadata

This JSON object contains information, that is, metadata, about the annotation file itself.

Type: object
Additional properties: true

Figure 8. Diagram of the metadata class

Table 5. Properties of the metadata class
| Name | Type | Required | Description |
|---|---|---|---|
| annotator | string | | Name or description of the annotator that created the annotations. |
| comment | string | | Additional information or description about the annotation content. |
| file_version | string | | Version number of the OpenLABEL annotation content. |
| name | string | | Name of the OpenLABEL annotation content. |
| schema_version | string | true | Version number of the OpenLABEL schema this annotation JSON object follows. |
| tagged_file | string | | File name or URI of the data file being tagged. |

6.4. Coordinate systems

This section contains concepts that are relevant for multi-sensor data labeling use cases.

A coordinate system is a system of numbers designed to uniquely determine the position of points on a manifold, such as the Euclidean space, for example, the 2D position of a pixel within an image, or the 3D position of a LiDAR return point in the world relative to the rear axle of the vehicle.

A coordinate transform or coordinate transformation is a relation that expresses the mapping from coordinates in one coordinate system to coordinates in another coordinate system. A coordinate transform always involves two coordinate systems: the source and the target coordinate system.
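As an illustration of this definition, the following minimal Python sketch applies a 4x4 homogeneous transform that maps a point from a source coordinate system (here: a camera) to a target coordinate system (here: the vehicle). The rotation and translation values are purely illustrative.

Python example

import numpy as np

# Sketch: a coordinate transform as a 4x4 homogeneous matrix mapping points
# from a source system (camera) to a target system (vehicle).
# Values are illustrative only.
R = np.eye(3)                          # no rotation, for simplicity
t = np.array([1.5, 0.0, 1.2])          # camera mounted 1.5 m ahead, 1.2 m up
T_vehicle_from_cam = np.eye(4)
T_vehicle_from_cam[:3, :3] = R
T_vehicle_from_cam[:3, 3] = t

p_cam = np.array([10.0, 0.5, -1.0, 1.0])  # homogeneous point in camera coords
p_vehicle = T_vehicle_from_cam @ p_cam
print(p_vehicle[:3])  # [11.5  0.5  0.2]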

Raw data to be annotated with ASAM OpenLABEL may contain multiple streams of sensor data coming from different exteroceptive and interoceptive sensors. This triggers the need to define multiple coordinate systems and several transforms that express the following:

  • How data from different sensors are spatiotemporally related.

  • How the labels relate to the sensor data.

  • How the sensor data relates to the real world.

ASAM OpenLABEL defines mechanisms to represent information about coordinate systems and transforms in the annotation schema.

More specifically, coordinate systems and their transforms fulfill the need to express spatiotemporal relations for the following, non-exhaustive, set of use cases:

  • Express how the labeled objects of interest are spatially located with respect to a GNSS/INS system, to map data, or to other sensors.

  • Express how light rays generated intensity values.

  • Express how LiDAR points are geolocated with respect to the world coordinates, vehicle coordinates, etc.

  • Express the intrinsic calibration parameters of a camera sensor.

  • Express the distortion coefficients from fish-eye camera lenses to rectified images.

To accommodate all these and more potential use cases, the ASAM OpenLABEL standard provides a method to describe an arbitrary number of coordinate systems and a method to describe the transforms between those coordinate systems.

In addition, the ASAM OpenLABEL standard provides a way to describe transforms that are fixed over time, transforms that change occasionally at specific time instants or frames, and transforms that vary continuously.

As specified in section Coordinate systems, users may define arbitrary names for coordinate systems. However, a small set of names is reserved for pre-defined coordinate systems that are commonly used in many systems and are standardized. The coordinate systems with standardized names are:

  • vehicle-iso8855

  • odom

  • map-UTM

  • geographic-wgs84

Whenever these names are used for a coordinate system, they shall have the meaning defined in the related standard.

Figure 9. Coordinate systems with heading, pitch, and roll
  • vehicle-iso8855 A right-handed coordinate system with the origin at the center of the rear axle projected down to ground level. Note that the origin is attached to the rigid body of the vehicle and not to an axle suspended between it and the body. It is at ground level when the vehicle is nominally loaded but it may be above or below ground level, depending on the actual load. Similarly, the axis pointing forward may point slightly upwards or downwards relative to ground level depending on the front to back loading of the vehicle. The x-axis is forward, the y-axis to the left, and the z-axis upwards. See also the ISO 8855 specification [11].

Figure 10. Vehicle coordinate system, ISO 8855
  • odom A 3D cartesian coordinate system that is approximately fixed in the world. The transform between the vehicle-iso8855 coordinate system and odom is guaranteed to be continuous so that it varies smoothly over time.

The transform between odom and map-UTM may be discontinuous. That means there may be sudden jumps in the value of the transform. The odom origin is often the starting point of the vehicle at the time the system is switched on. See the ROS documentation [14].
  • map-UTM A 3D cartesian coordinate system useful for mapping moderately sized regions of the Earth. It is locked to the Earth and is a set of slices of flat coordinates that cover the Earth. See the UTM specification [15].

  • geographic-wgs84 A 3D ellipsoidal coordinate system used for GNSS systems, meaning latitude, longitude, and altitude. It is fixed to the Earth, which means that it ignores, for example, continental drift, and covers the entire Earth.

For common use cases, there may be several sets of coordinate systems (blue boxes) and transforms between them that are commonly used, as the following diagrams show.

Figure 11. Example of a transform of a multi-sensor setup into a geospatial coordinate system

Figure 11 shows an example of a Robot Operating System (ROS) based system.

The sensors described in the example system in the introduction might have the following coordinate systems and transform tree.

Figure 12. Example of a transform of a camera and GPS sensor setup into a geospatial coordinate system

Figure 12 shows how a set of data captured from a dash-cam, that is, a single camera including a GPS, might look.

Figure 13. Example of a transform of a camera setup into an odom coordinate system

Figure 13 shows how a single camera with no other data, with the movement of the camera deduced by structure from motion, might look.


6.5. Semantic segmentation

Semantic image segmentation, also called pixel-level classification, is the task of clustering those parts of an image together which belong to the same object class. Technically, it means assigning to each pixel a value/code corresponding to a certain class of interest (object/entity category).

The semantic segmentation task treats objects as stuff, which is amorphous and uncountable. Multiple objects of the same class are treated as a single entity. Thus, no information exists about specific instances of a class. Cars are all assigned a color code, for example blue, and are treated as being part of the same amorphous "car stuff".

Semantic segmentation annotations follow the form of the objects and have no fixed shape. Manually, this is usually achieved by drawing refined polygons around the regions of interest, or by painting the region of interest through a paintbrush-like feature. The result is a precise mask that isolates only the object of interest and no surrounding pixels.

In the 2D annotation space, this method provides the highest accuracy in delineating objects. However, this comes at an increased cost in comparison with other annotation methods. Furthermore, segmentation takes more time during the labeling process than other 2D annotation methods and thus has lower throughput.

This section contains concepts that are relevant for multi-sensor data labeling use cases.

6.5.1. Formal definition

Formally, semantic segmentation can be defined as follows:

Let \(P=\{p_{1}, p_{2}, \ldots, p_{p}\}\) be the set of all the pixels in a given frame, for example, an image.

Then, the cardinality \(|P|\) is equal to the number of pixels in such a frame.

Let \(C=\{c_{1}, c_{2}, \ldots, c_{c}\}\) be the set of all the classes that are defined for a labeling task, for example, \(c_1=car, c_2=pedestrian\).

Then, the cardinality \(|C|\) is equal to the number of classes that are defined for such a task.

Performing semantic segmentation labeling on an image means establishing a relation that is valid when a pixel \(p_{x}\) represents a portion of an object belonging to one of the defined classes \(c_{y}\).

\(R_{seg}\) can be defined as a relation between the sets \(P\) and \(C\). Formally, this means defining a subset of the cartesian product \(R_{seg} \subset P \times C\), where \(P \times C = \{ (p_{1},c_{1}), (p_{1},c_{2}), \ldots, (p_{n},c_{m}) \}\).

Let \(D \subseteq P\) be the domain of the semantic segmentation relation \(R_{seg}\). The following taxonomy is produced:

Semantic segmentation taxonomy

  • Partial scene segmentation when \(\exists p_{x} \in P, \forall c_{y} \in C: (p_{x}, c_{y}) \notin R_{seg}\). There are some pixels that have no class associated with them. In this case \(D \subset P\).

  • Full scene segmentation when \(\forall p_{x} \in P, \exists c_{y} \in C : (p_{x},c_{y}) \in R_{seg}\). All pixels have a class associated. In this case \(D\) coincides with \(P\). Note that using a class such as unlabeled or other for all pixels outside of the real classes of interest is still a form of full scene segmentation.

  • Single-class per pixel segmentation when \(\forall p_{x} \in D, \exists! c_{y} \in C: (p_{x},c_{y}) \in R_{seg}\). This is the case when each labeled pixel is associated with exactly one class.

  • Multi-class per pixel segmentation when \(\exists p_{x} \in D, \exists c_{1}, c_{2}, \ldots, c_{k} \in C: (p_{x},c_{1}), (p_{x},c_{2}), \ldots, (p_{x},c_{k}) \in R_{seg}\). This is the case when at least one labeled pixel is associated with more than one class.

6.5.2. Instance segmentation

Instance segmentation enriches the semantic segmentation information, adding a separation among specific different instances of objects belonging to a class. This method is used to separate stuff into individual, countable things. Semantic classes can be either things (objects with a well-defined shape, for example a car, a person) or stuff (amorphous background regions, for example grass, sky). In contrast with the semantic segmentation task, where each pixel belongs to a set of predefined classes, in instance segmentation the number of instances is not known beforehand.

Formal definition

Formally, instance segmentation can be defined as an extension of semantic segmentation as follows:

  • Let \(I=\{i_{1}, i_{2}, \ldots, i_{n}\}\) be the set of all the instances of countable objects in the scene (image).

  • Then the cardinality of the set \(|I|\) is equal to the total number of object instances that populate the scene.

  • Performing instance segmentation labeling on an image means establishing a ternary relation \(I_{seg} \subset P \times C \times I\) that is valid when a pixel \(p_{x}\) represents a portion of an object belonging to one of the defined classes \(c_{y}\) and to a specific object instance \(i_{z}\), where \(P \times C \times I = \{ (p_{1},c_{1},i_{1}), (p_{1},c_{1},i_{2}), \ldots, (p_{n},c_{m},i_{l}) \}\).

Instance awareness may be added to any kind of semantic segmentation described before by extending the relation to an additional instance set.

Let \(D_{in} \subseteq P\) be the domain of the instance segmentation relation \(I_{seg}\).

  • Instance unique segmentation when \(\forall p_{x} \in D_{in}, \exists! c_{y} \in C, \exists! i_{z} \in I: (p_{x},c_{y},i_{z}) \in I_{seg}\). This is the case when each labeled pixel is associated with exactly one class and exactly one instance of that class.

  • Multi-class multi-instance segmentation when \(\exists p_{x} \in D_{in}, \exists c_{1}, c_{2}, \ldots, c_{c} \in C, \exists i_{1}, i_{2}, \ldots, i_{i} \in I : (p_{x},c_{1},i_{1}), (p_{x},c_{1},i_{2}), \ldots, (p_{x},c_{c},i_{i}) \in I_{seg}\). This is the case when each labeled pixel may be associated with more than one class and with more than one instance of those classes.

Starting from this general definition, all possible particular cases, permutations, or ways to construct semantic and instance segmentation labeling can be covered.
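To make the definitions concrete, the following minimal Python sketch encodes a single-class-per-pixel semantic segmentation and an instance-unique segmentation as integer masks over a small image. The class and instance identifiers are illustrative.

Python example

import numpy as np

# Sketch: single-class-per-pixel semantic segmentation (R_seg) and
# instance-unique segmentation (I_seg) as integer masks over a 4x4 image.
CLASSES = {0: "unlabeled", 1: "car", 2: "pedestrian"}

semantic = np.array([[0, 1, 1, 0],      # pixel -> class id
                     [0, 1, 1, 2],
                     [0, 1, 1, 2],
                     [0, 0, 0, 0]])
instance = np.array([[0, 1, 1, 0],      # pixel -> instance id (0 = none)
                     [0, 1, 1, 2],
                     [0, 1, 1, 2],
                     [0, 0, 0, 0]])

# Every pixel has a class (0 = "unlabeled"), so this is a form of full
# scene segmentation.
for iid in np.unique(instance[instance > 0]):
    cls = CLASSES[int(semantic[instance == iid][0])]
    print(f"instance {iid}: {cls}, {int((instance == iid).sum())} px")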


7. Multi-sensor data labeling

7.1. Introduction

Multi-sensor data labeling is the process of enriching data streams with information on the location and the characteristics of labeled objects or the entire scenario at a given point in time.

Labels summarize relevant semantic entities and show their spatiotemporal location within the data through spatiotemporal constructs, such as labeling geometries. There are different types of labeling geometries. Each type provides a suitable input representation for specific computer vision and machine-learning tasks.

This chapter covers multi-sensor data labeling in detail, including the following topics:

  • List the raw data considered relevant for the multi-sensor data labeling use case.

  • Introduce and describe in detail the annotation schema, its structure, elements, and the different ways of expressing labeling geometries, coordinate systems, transforms, and other information relevant for multi-sensor data labeling.

  • Describe the mechanisms that govern the reference to external knowledge repositories, such as ontologies, that organize and define the semantics of the labels.

  • Describe the supported data types and their representation.

  • Provide examples that show how to utilize the schema to produce valid annotation instances in relevant specific cases.


7.1.1. Raw data sources for multi-sensor data labeling

Examples of raw data sources:

  • Images

  • Videos

  • Point clouds

7.2. Annotation schema

The annotation schema defines the structure of annotations, data types, and conventions needed to unambiguously interpret the annotations. The annotation data format specifies how the annotation data is encoded for storage in computer files.

The annotation schema is described and formatted as a JSON schema. It defines the shape to which valid JSON annotation instances shall conform. The structure of the ASAM OpenLABEL annotation schema is serialized in the ASAM OpenLABEL JSON schema file. The annotation schema itself conforms to the JSON schema Draft 7 specification [13].

The annotation schema of ASAM OpenLABEL addresses the following general features related to multi-sensor data labeling:

  • Labeling different spatiotemporal objects.

  • Static and dynamic (time) properties of objects.

  • Geometric and non-geometric attributes for objects.

  • Nested attributes.

  • Management of coordinate systems, odometry and sensor configuration.

  • Multi-source (sensor) annotations for objects.

  • Persistent identities of objects through time.

  • Linkage to ontologies and external resources.

  • Relations between elements, for example, object performs action.

  • Different types of elements: objects, actions, events, and contexts.

  • Customizable and optional fields.

The annotation schema defines three main characteristic aspects of annotation data:

  • Structure: How data is organized, using hierarchies and key-value dictionaries.

  • Types: Primitive data types for key-value items.

  • Conventions: Documented interpretation of data values.

The annotation schema for multi-sensor data labeling follows the same principles of the annotation schema for scenario tagging, meaning JSON and JSON schema, as described in chapter Scenario tagging.

7.3. Structure

The ASAM OpenLABEL annotation schema for multi-sensor data labeling is structured as a dictionary and can be described from top to bottom. This section contains diagrams intended to visualize the structure. The details of the structure can be consulted in the ASAM OpenLABEL JSON schema file.

Any ASAM OpenLABEL JSON data shall have a root key named openlabel. Its value is a dictionary containing the rest of the structure as described in the next sections. The version of the schema shall be defined inside the metadata structure, using the key schema_version. All other entries are optional.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        }
    }
}

The following example shows a JSON payload corresponding to the first-level items inside the root openlabel value that are related to multi-sensor data labeling.

JSON example

{
    "openlabel": {
        "objects": { ... },
        "actions": { ... },
        "events": { ... },
        "contexts": { ... },
        "relations": { ... },
        "frames": { ... },
        "frame_intervals": { ... },
        "metadata": { ... },
        "ontologies": { ... },
        "resources": { ... },
        "coordinate_systems": { ... },
        "streams": { ... }
    }
}

For multi-sensor data labeling, the ASAM OpenLABEL structure defines dictionaries for the elements, meaning objects, actions, events, contexts, and relations. Each entry of the dictionary is a key-value pair where the key is a unique identifier of the element, for example, an object. The value is the container of static information.

Supporting structures define the following:

  • ontologies that are used.

  • External resources to enable linked data.

  • coordinate_systems to explicitly specify how to transform data.

  • streams which contain information on the data being labeled, for example, sensor information, such as intrinsic calibration parameters of cameras.

If time information is needed, for example, for labeling video sequences, the frames item contains a dictionary of containers at frame level. The frame_intervals item summarizes the frame intervals that contain information in this ASAM OpenLABEL annotation file.
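As an illustration, the following minimal Python sketch (a hypothetical helper, not part of the standard) collapses a set of labeled frame keys into a frame-interval summary, assuming the frame_start and frame_end interval keys.

Python example

# Sketch: collapse labeled frame keys into frame intervals
# (helper name and output layout are assumptions).
def to_frame_intervals(frame_keys):
    frames = sorted(int(k) for k in frame_keys)
    intervals, start = [], frames[0]
    for prev, cur in zip(frames, frames[1:]):
        if cur != prev + 1:
            intervals.append({"frame_start": start, "frame_end": prev})
            start = cur
    intervals.append({"frame_start": start, "frame_end": frames[-1]})
    return intervals

print(to_frame_intervals({"0", "1", "2", "7", "8"}))
# [{'frame_start': 0, 'frame_end': 2}, {'frame_start': 7, 'frame_end': 8}]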

Figure 14. ASAM OpenLABEL labeling structure

Figure 14 shows the ASAM OpenLABEL data structure for multi-sensor data labeling.

Figure 15. ASAM OpenLABEL frame structure

Figure 15 shows the structure of the frame value. Its structure is similar to the openlabel value as it contains dictionaries for the elements, meaning objects, actions, events, contexts, and relations. Only the dynamic information inside them is detailed.

In addition, frame_properties may contain information about timestamping details, or transforms of specific coordinate systems and other stream properties.

Annotation data is stored as element data, for example, object_data, which each element may contain in the form of arrays of structures, organized per data type.

Figure 16. ASAM OpenLABEL attributes

Figure 16 shows the structure of generic attributes (see Data types (generic)).

Figure 17. ASAM OpenLABEL geometric attributes

Figure 17 shows the structure of the geometric attributes (see Data types (geometric)).

7.4. Elements

objects, actions, events, contexts, and relations are elements. These structures share similar properties in terms of attributes, types, and hierarchies.

  • objects: A structure to represent information about physical entities in scenes. Examples of objects are pedestrians, cars, the ego-vehicle, traffic signs, lane markings, buildings, and trees.

  • actions: A description of semantically meaningful acts being done. They may be defined for several frame intervals, similar to objects, for example, isWalking.

  • events: Instants in time which have semantic load. events may trigger other events or actions, for example, startsWalking.

  • contexts: Other descriptive information about the scene that contains no spatial or temporal information and therefore is not targeted by actions or events, for example:

    • properties of the scene, such as Urban or Highway.

    • weather conditions, such as Sunny or Cloudy.

    • general information about the location, such as Germany or Spain.

Attributes

  • uid: A unique identifier that determines the identity of the element. It can be a simple unsigned integer (from 0 upwards, for example 0) or a Universal Unique Identifier (UUID) of 32 hexadecimal characters, for example 123e4567-e89b-12d3-a456-426614174000. uid values need not be sequential nor start at 0, which is useful for preserving identifiers from other label files.

  • name: A friendly identifier of the element. It is not unique, but is employed by human users to rapidly identify elements in the scene, for example, Peter.

  • type: A semantic type of the element. It determines which class the element belongs to, for example, Car, Running, see Ontologies.

Optionally, elements may also have the following items:

  • ontology_uid: A string identifier of the ontology which contains the definition of the type of the element (see Ontologies).

  • Element data, for example object_data: Container of static information about the object (see Data types (geometric)).

  • Element data pointers, for example, object_data_pointers: Pointers to element data at frames (see Frames).

  • frame_intervals: An array of frame intervals where the element exists.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "objects": {
            "0": {
                "name": "car1",
                "type": "Car"
            }
        }
    }
}

The example shows a sample object with the mandatory items name and type.

JSON only permits keys to be strings. Therefore, the integer unique identifiers shall be stringified, for example, "0". However, carefully written APIs can parse JSON strings into integers for better access efficiency and sorting capabilities.
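As an illustration, the following minimal Python sketch parses the stringified keys back into integers for sorting and indexed access.

Python example

import json

# Sketch: convert stringified element keys back to integers for sorting
# and efficient access. The payload is abbreviated from the example above.
payload = '{"openlabel": {"objects": {"1": {"name": "o2"}, "0": {"name": "o1"}}}}'
objects = json.loads(payload)["openlabel"]["objects"]
by_uid = {int(uid): obj for uid, obj in objects.items()}
for uid in sorted(by_uid):
    print(uid, by_uid[uid]["name"])  # prints 0 o1, then 1 o2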

Rules

  • All elements shall have a uid as key.

  • The uid shall be unique for each element type.

  • Each element type (action, object, event, context, and relation) may have its own list of unique identifiers.

  • All elements shall have a type.

  • All elements shall have a name. The entry can be left empty, as names are not used to index the elements.

7.4.1. Element data

The main mechanism to add information about an element is to define element data, using the data types defined in Data types (geometric). Element data can be added statically or dynamically.

Rules

  • Static element data shall be added at the element value, under the corresponding key, for example, object_data.
    The type of data used, for example, bbox or vec, becomes the key for an array of such data types, so that one or more instances of that data type can be stored.

JSON example

{
   "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "objects": {
           "0": {
               "name": "pedestrian1",
               "type": "Pedestrian",
               "object_data": {
                   "bbox" : [{
                            "name" : "body",
                            "val" : [303.73, 935.58, 135.62, 330.88]
                        }, {
                            "name" : "head",
                            "val" : [289.93, 814.08, 38.20, 39.96]
                        }
                    ]
               }
           }
       }
   }
}

The example shows a single object of type Pedestrian with two bbox items, one to describe the body and the other for the head.

Rules

  • Dynamic element data shall be added similarly, but inside the corresponding frame (see Frames).

  • Element data may be nested inside other element data as attributes.

Only non-geometrical element data types can be nested (see Data types (geometric)).

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "objects": {
            "0": {
               "name": "car1",
               "type": "Car",
               "object_data": {
                   "bbox" : [{
                            "name" : "shape",
                            "val" : [100, 100, 500, 300],
                            "attributes": {
                                "boolean": [{
                                        "name": "visible",
                                        "val": true
                                    },
                                    {
                                        "name": "interpolated",
                                        "val": false
                                    }
                                ]
                            }
                        }
                    ]
                }
            }
        }
    }
}

The example shows boolean attributes (visible and interpolated) added to a bbox.

Attributes are nested just like any other element data and therefore can contain arrays of element data, indexed by type.

7.4.2. Universal Unique Identifiers (UUID)

UUIDs in this specification are defined in accordance with RFC 4122 [16].

When using UUIDs, the keys are substituted by 32 hexadecimal character strings.

JSON example

{
   "openlabel": {
       "metadata": {
            "schema_version": "1.0.0"
        },
        "objects": {
           "c44c1fc2-ee48-4b17-a20e-829de9be1141": {
               "name": "van1",
               "type": "Van"
           }
       }
   }
}

The example shows that the key identifier of an object is a string containing 32 hexadecimal characters following the UUID convention.
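As an informative note, Python's standard uuid module implements RFC 4122, so a compliant key can be produced with a one-liner; the variable name below is illustrative.

import uuid

# Generate an RFC 4122 version-4 UUID to use as an element key,
# for example "c44c1fc2-ee48-4b17-a20e-829de9be1141".
object_uid = str(uuid.uuid4())
print(object_uid)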

7.5. Frames

All dynamic (temporal) information of the annotations shall be specified at frame level, inside frames. Each frame is indexed within the ASAM OpenLABEL JSON data with an integer number.

The frame number is an ASAM OpenLABEL identifier of a certain instant in time. Properties of the frame can be specified to match specific timestamps or frame indexes in video sequences (see Frame properties).

In multi-stream annotation data, a frame may represent several time instants as sensor data might not be perfectly aligned (see Synchronization).

Class

frame

A frame is a container of dynamic, timewise, information.

Additional properties: false

Type: object

Diagram
Figure 18. Diagram of the frame class
Table 6. Properties of the frame class

  • actions (object, additional properties: false, reference: #/definitions/action_data): This is a JSON object that contains dynamic information on OpenLABEL actions. Action keys are strings containing numerical UIDs or 32-byte UUIDs. Action values may contain an "action_data" JSON object.

  • contexts (object, additional properties: false, reference: #/definitions/context_data): This is a JSON object that contains dynamic information on OpenLABEL contexts. Context keys are strings containing numerical UIDs or 32-byte UUIDs. Context values may contain a "context_data" JSON object.

  • events (object, additional properties: false, reference: #/definitions/event_data): This is a JSON object that contains dynamic information on OpenLABEL events. Event keys are strings containing numerical UIDs or 32-byte UUIDs. Event values may contain an "event_data" JSON object.

  • frame_properties (object, additional properties: true, reference: #/definitions/stream): This is a JSON object which contains information about this frame.

  • objects (object, additional properties: false, reference: #/definitions/object_data): This is a JSON object that contains dynamic information on OpenLABEL objects. Object keys are strings containing numerical UIDs or 32-byte UUIDs. Object values may contain an "object_data" JSON object.

  • relations (object, additional properties: false): This is a JSON object that contains dynamic information on OpenLABEL relations. Relation keys are strings containing numerical UIDs or 32-byte UUIDs. Relation values are empty. The presence of a key-value relation pair indicates that the specified relation exists in this frame.

7.5.1. Frame intervals

The frame_intervals key defines the array of frame intervals for which the ASAM OpenLABEL JSON data contains information.

Class

frame_interval

A frame interval defines a starting and ending frame number as a closed interval. That means the interval includes the limit frame numbers.

Additional properties: false

Type: object

Diagram
Figure 19. Diagram of the frame interval class
Table 7. Properties of the frame interval class

  • frame_end (integer): Ending frame number of the interval.

  • frame_start (integer): Initial frame number of the interval.

JSON example

{
   "openlabel": {
       "metadata": {
            "schema_version": "1.0.0"
        },
        "frame_intervals": [{
                "frame_start": 0, "frame_end": 1
            }, {
                "frame_start": 5, "frame_end": 7
            }
        ],
        "frames": {
            "0": { ... },
            "1": { ... },
            "5": { ... },
            "6": { ... },
            "7": { ... }
        }
    }
}

The example shows frames indexed as 0, 1, 5, 6, and 7. The frame_intervals show the corresponding two intervals.
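As an informative sketch (the helper name is hypothetical), frame intervals can be derived by grouping consecutive frame indexes into closed intervals, reproducing the example above:

# Group sorted frame indexes into closed intervals, e.g.
# [0, 1, 5, 6, 7] -> [{"frame_start": 0, "frame_end": 1},
#                     {"frame_start": 5, "frame_end": 7}]
def build_frame_intervals(frame_indexes):
    intervals = []
    for idx in sorted(frame_indexes):
        if intervals and idx == intervals[-1]["frame_end"] + 1:
            intervals[-1]["frame_end"] = idx  # extend the current interval
        else:
            intervals.append({"frame_start": idx, "frame_end": idx})
    return intervals

print(build_frame_intervals([0, 1, 5, 6, 7]))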

Frame intervals are also properties of elements, specifying the periods of time where the element exists or has data. Using several frame intervals makes it possible to explicitly declare time gaps where the element disappears or does not exist, while maintaining the same uid.

Inside each frame, dynamic information about elements may be included, using the same structure defined for elements.

JSON example

{
   "openlabel": {
       "metadata": {
            "schema_version": "1.0.0"
        },
       "frames": {
           "0": {
               "objects": {
                   "1": {}
               }
           },
           "1": {
               "objects": {
                   "1": {}
               }
           }
       },
       "objects": {
           "1": {
               "name": "van1",
               "type": "Van",
               "frame_intervals": [{"frame_start": 0, "frame_end": 1}]
           }
       }
   }
}

The example shows an object which exists in frames 0 and 1 but has no specific information at those frames.

If the specific information of the object for a given frame is nothing but its existence, then the object information at that frame is just a pointer to its unique identifier, as shown in the example above.

When frame-specific information is added, it is enclosed as object_data inside the corresponding frame and object (see Element data).

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "frames": {
            "0": {
                "objects": {
                    "1": {
                        "object_data": {
                            "bbox": [{
                                   "name": "shape",
                                   "val": [12, 867, 600, 460]
                                }
                            ]
                        }
                    }
                }
            },
            "1": { ... }
        },
        "objects": {
            "1": {
                "name": "van1",
                "type": "Van",
                "frame_intervals": [{"frame_start": 0, "frame_end": 1}]
            }
        }
    }
}

The example shows an object which exists in frames 0 and 1. The object has specific geometric information, for example, a bbox named shape at frame 0.

7.5.2. Element data pointers

Since element data is not indexed by integer unique identifiers, as elements are, the structure defines a mechanism to index element data by adding element data pointers. For example, object_data_pointers within an object contain key-value pairs to identify which object_data names are used and which are their associated frame_intervals.

Class

element_data_pointers

This is a JSON object which contains OpenLABEL element data pointers. Element data pointer keys shall be the "name" of the element data this pointer points to.

Additional properties: false

Type: object

Diagram
Figure 20. Diagram of the element data pointers class

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "objects": {
            "0": {
                "name": "car0",
                "type": "car",
                "frame_intervals": [{"frame_start": 0, "frame_end": 10}],
                "object_data": {
                    "text": [{
                            "name": "color",
                            "val": "blue"
                        }
                    ]
                },
                "object_data_pointers": {
                    "color": {
                        "type": "text"
                    },
                    "shape": {
                        "type": "bbox",
                        "frame_intervals": [{"frame_start": 0, "frame_end": 10}],
                        "attributes": {
                            "visible": "boolean"
                        }
                    }
                }
            }
        },
        "frames": {
            "0": { ... },
            ...
            "10": { ... }
        }
        ...
    }
}

The example shows that the pointers may refer to static (frame-less, color attribute) and dynamic (frame-specific, shape attribute) object_data, and also contain information about the nested attributes (the visible attribute of shape).

This feature is useful for rapidly retrieving element data information from the ASAM OpenLABEL JSON data, without the need to explore the entire set of frames.
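A minimal Python sketch of such a retrieval, assuming the structure of the example above (the helper name is hypothetical):

# Hypothetical helper: use object_data_pointers to locate dynamic object
# data by name without scanning every frame.
def frames_with_object_data(openlabel, object_uid, data_name):
    pointer = openlabel["objects"][object_uid]["object_data_pointers"][data_name]
    frames = []
    for interval in pointer.get("frame_intervals", []):
        frames.extend(range(interval["frame_start"], interval["frame_end"] + 1))
    return pointer["type"], frames

# For the example above, frames_with_object_data(data["openlabel"], "0", "shape")
# would return ("bbox", [0, 1, ..., 10]).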

7.5.3. Frame properties

Frame properties may include three types of details about the frame:

  • timestamp: A relative or absolute time reference that specifies the time instant this frame corresponds to.

  • streams (see Streams): Sensors may have dynamic properties for a certain specific instant, such as intrinsic calibration data or sync details (see Synchronization).

  • transforms: Coordinate systems may have changed their relative position with respect to parent coordinate systems for specific frames (see Coordinate Systems and Transforms).

Class

frame

A frame is a container of dynamic, timewise, information.

Additional properties: false

Type: object

Diagram
Figure 21. Diagram of the frame class
Table 8. Properties of the frame class

  • actions (object, additional properties: false, reference: #/definitions/action_data): This is a JSON object that contains dynamic information on OpenLABEL actions. Action keys are strings containing numerical UIDs or 32-byte UUIDs. Action values may contain an "action_data" JSON object.

  • contexts (object, additional properties: false, reference: #/definitions/context_data): This is a JSON object that contains dynamic information on OpenLABEL contexts. Context keys are strings containing numerical UIDs or 32-byte UUIDs. Context values may contain a "context_data" JSON object.

  • events (object, additional properties: false, reference: #/definitions/event_data): This is a JSON object that contains dynamic information on OpenLABEL events. Event keys are strings containing numerical UIDs or 32-byte UUIDs. Event values may contain an "event_data" JSON object.

  • frame_properties (object, additional properties: true, reference: #/definitions/stream): This is a JSON object which contains information about this frame.

  • objects (object, additional properties: false, reference: #/definitions/object_data): This is a JSON object that contains dynamic information on OpenLABEL objects. Object keys are strings containing numerical UIDs or 32-byte UUIDs. Object values may contain an "object_data" JSON object.

  • relations (object, additional properties: false): This is a JSON object that contains dynamic information on OpenLABEL relations. Relation keys are strings containing numerical UIDs or 32-byte UUIDs. Relation values are empty. The presence of a key-value relation pair indicates that the specified relation exists in this frame.

JSON example

{
    "openlabel": {
        "frames": {
            "0": {
                "frame_properties": {
                    "timestamp": "2020-04-11 12:00:01",
                    "streams": {
                        "Camera1": {
                            "stream_properties": {
                                "intrinsics_pinhole": {
                                    "camera_matrix_3x4": [ 1000.0,    0.0, 500.0, 0.0,
                                                              0.0, 1000.0, 500.0, 0.0,
                                                              0.0,    0.0,   0.0, 1.0],
                                    "distortion_coeffs_1xN": [],
                                    "height_px": 480,
                                    "width_px": 640
                                },
                                "sync": {
                                    "frame_stream": 1,
                                    "timestamp": "2020-04-11 12:00:02"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

The example shows frame_properties of frame 0, containing information about a timestamp and some properties specific for frame 0 corresponding to stream Camera1.

The sync field within stream_properties defines the frame number of the stream that corresponds to this frame, along with timestamping information, if needed. This feature is useful for annotating multiple cameras which might not be perfectly aligned. In such cases, frame 0 of the ASAM OpenLABEL JSON data corresponds to frame 0 of the first stream to occur. In this way, frame_stream shall identify which frame of this stream corresponds to the frame in which it is enclosed.
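As an informative sketch (the helper name is hypothetical), the per-frame sync information can be read as follows, assuming the structure of the example above:

# Hypothetical helper: look up which frame of a given stream corresponds
# to a master frame, using the per-frame "sync" information shown above.
def stream_frame_for(openlabel, master_frame, stream_name):
    frame = openlabel["frames"][str(master_frame)]
    stream = frame["frame_properties"]["streams"][stream_name]
    return stream["stream_properties"]["sync"]["frame_stream"]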

7.5.4. Synchronization

This section provides detail on the synchronization of multiple streams and their time information frames.

Labels can be produced to be related to specific streams, for example, cameras and LiDAR. When multiple streams of this type are present and labels need to be produced for several of them, for example, bounding boxes for images of the camera and cuboids for the point clouds of the LiDAR, a synchronization and matching strategy is needed.

The synchronization of the data streams, for example, images and point clouds, is determined by the data source set-up and not by the annotation stage. That means that the data container may contain precise hardware timestamps for images and point clouds. In addition, the correspondence between frame indexes of multiple cameras, for example, frame 45 of camera 1 corresponding, because of proximity in time, to frame 23 of camera 2, may be due to the cameras using different frequencies or starting with some delay.

Therefore, when producing labels for such different frames, the annotation format needs to allocate space and structure for such timing information. This shall be done in a way that all labels are easily associated with their corresponding data and time.

The JSON schema defines the frame data containers, which correspond to master frame indexes.

One stream

In many cases, there is a single stream of data that needs to be labeled, for example, an image sequence.

Simple case

The simplest use-case for a stream:

  • Nothing needs to be specified, for example, sensor names or timestamps.

  • Frame indexes are integers, starting from 0.

  • The master frame index coincides with the stream-specific frame index. This means the stream-specific frame index is not labeled.

Figure 22. One stream

Figure 22 shows a simple timeline where frames represent discrete samples of time and are indexed using a master frame index.

JSON example

{
    "openlabel": {
        "frames": {
            "0": { ... },
            "1": { ... }
        }
    }
}

The example shows the indexing approach in ASAM OpenLABEL where frames are indexed using an ordered numeric string, for example, 0 and 1.

Stream frame index not coincident with master frame index

It is possible to define a specific frame numbering for stream-specific frames inside the master frame index, which always starts from 0. These counts may be non-coincident, reflecting the fact that the stream indexes are discontinuous or start at a certain value.

Figure 23. One stream (not coincident stream index and frame index)

Figure 23 shows a simple timeline where the master frame index starts at 0 and corresponds to a specific frame index of a stream, starting at 45.

JSON example

{
    "openlabel": {
        "frames": {
            "5": {
                "frame_properties": {
                    "timestamp": "2020-04-11ย 12:00:01",
                    "streams": {
                        "Camera1": {
                            "stream_properties": {
                                "sync": { "frame_stream": 91}
                            }
                        }
                    }
                }
            }
        }
    }
}

The example shows how the master frame index, for example, 5, can be linked to a stream-specific frame index, for example, 91, using stream_properties inside frame_properties.

Other properties, such as timestamps, may be added for detailed timing information of each stream frame.

Figure 24. One stream (with timestamps and other properties)

Figure 24 shows a simple timeline with defined frames which span over a certain period of time, for example, corresponding to the exposure time of a camera.

JSON example

{
    "openlabel": {
        "frames": {
            "0": {
                "frame_properties": {
                    "timestamp": "2020-04-11ย 12:00:01",
                    "aperture_time_us": "56"
                }
            }
        }
    }
}

The example shows how a certain frame may have customized frame_properties, such as aperture_time_us, to define the exposure time in microseconds.

Multiple streams

Complex labeling examples may include multiple streams, for example, labels that need to be defined for different sensors.

Same frequency and same start and indexes

The master frame index coincides with each of the stream indexes. It is fully synchronized.

Figure 25. Several streams (same frequency and same start and indexes)

Figure 25 shows two timelines corresponding to two streams, Camera1 and Camera2, with stream-specific frame indexes coinciding with the master frame index.

Same frequency and different start and indexes

It is possible to define stream indexes independently to reflect, for example, that one stream is delayed by one frame but still synchronized.

Figure 26. Several streams (same frequency and different start and indexes)

Figure 26 shows how two different timelines corresponding to two different streams can be shifted so that the stream-specific frame indexes do not match the master frame index. In the example, the master frame index = 1 corresponds to frame 1 of Camera1 and frame 80 of Camera2. Note that in this example, for master frame = 0, there is no information about Camera2, to represent that this stream started producing information after the stream of Camera1.

JSON example

{
    "openlabel": {
        "frames": {
            "1": {
                "frame_properties": {
                    "timestamp": "2020-04-11ย 12:00:01",
                    "streams": {
                        "Camera1": {
                            "stream_properties": {
                                "sync": { "frame_stream": 1}
                            }
                        },
                        "Camera2": {
                            "stream_properties": {
                                "sync": { "frame_stream": 0}
                            }
                        }
                    }
                }
            }
        }
    }
}

The example shows how different stream specific frame indexes can be defined by a certain master frame index as frame_properties.

Other possible differences in synchronization, for example jitter, may be labeled by embedding timestamping information for each stream frame.

Figure 27. Several streams containing jitter

Figure 27 shows another use-case where frames do not follow a perfectly periodic sampling rate. This feature can be labeled by adding a jitter variable to frame_properties.

Same frequency and constant shift

If the frame shift is constant, a more compact representation is possible by specifying the shift at root stream_properties rather than on each frame, as was shown in the previous examples:

Figure 28. Several streams (same frequency and constant shift)

Figure 28 shows a specific case where the time shift between two streams (Camera1 and Camera2) is constant and kept fixed for the entire scene.

JSON example

{
    "openlabel": {
        "streams": {
            "Camera1": {
                "stream_properties": {
                    "sync": { "frame_stream": 0}
                }
            },
            "Camera2": {
                "stream_properties": {
                    "sync": { "frame_stream": 1}
                }
            }
        }
    }
}

The example shows how to represent a fixed time shift between a certain stream and the master frame index as stream_properties instead of as frame_properties. In the example, Camera2 is shifted one frame ahead of the master frame index, while Camera1 has shift 0.
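A minimal, informative Python sketch of this compact representation, assuming the constant shift is interpreted as an additive offset as in the example above (the helper name is hypothetical):

# Hypothetical helper: derive the stream-specific frame index from the
# master frame index when a constant shift is declared at the root
# "streams" entry.
def stream_frame_with_shift(openlabel, master_frame, stream_name):
    props = openlabel["streams"][stream_name]["stream_properties"]
    return master_frame + props["sync"]["frame_stream"]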

Different frequency

Streams might represent data coming from sensors with different capturing frequency, for example, a camera at 30 Hz and a LiDAR at 10 Hz. Following the previous examples, it is possible to embed stream frames inside master frames so the frequency information is also included.

Figure 29. Several streams (different frequency)

Figure 29 shows a typical configuration where the master frame index follows the fastest stream, in this case the Camera1 stream.

Figure 30. Several streams (different frequency)

Figure 30 shows a typical configuration where the master frame index follows the slowest stream, in this case the Lidar1 stream.

Specifying coordinate system for each label

After defining the coordinate systems (see Coordinate Systems and Transforms) and the timing information, as shown in the examples above, labels for elements and element data may be declared for specific coordinate systems.

Coordinate systems of specific streams can be defined as well. In this way, for each image, the information about labels, timings, and coordinate systems is given together.

JSON example

{
    "openlabel": {
        "frames": {
            "0": {
                "objects": {
                    "0": {
                        "object_data": {
                            "bbox": [
                                {
                                    "name": "shape2D",
                                    "val": [600, 500, 100, 200],
                                    "coordinate_system": "Camera1"
                                }
                            ],
                            "cuboid": [
                                {
                                    "name": "shape3D",
                                    "val": [ ... ],
                                    "coordinate_system": "Lidar1"
                                }
                            ]
                        }
                    }
                },
                "frame_properties": {
                    "streams": {
                        "Camera1": {
                            "stream_properties": {
                                "sync": { "frame_stream": 1, "timestamp": "2020-04-11 12:00:07"},
                            }
                        },
                        "Lidar1": {
                            "stream_properties": {
                                "sync": { "frame_stream": 0, "timestamp": "2020-04-11 12:00:10"}
                            }
                        }
                    }
                }
            }
        },
        "objects": {
            "0": {
                "name": "car1",
                "type": "car",
                "coordinate_system": "Camera1",
                ...
            }
        }
    }
}

The example shows that objects may be expressed with respect to a specific coordinate_system. For example, the bounding box named shape2D of object 0 is expressed with respect to the Camera1 coordinate system, while the cuboid named shape3D is expressed with respect to the Lidar1 coordinate system.

7.6. Streams

Complex scenes may be observed by several sensing devices, which produce multiple streams of data. Each of these streams might have different properties, for example, intrinsic calibration parameters and frequency. The ASAM OpenLABEL JSON schema defines the option to specify such information for a multi-sensor, and thus multi-stream, set-up by allocating space for stream-specific descriptions. In addition, it offers the ability to choose, for each specific labeled element, which stream it corresponds to.

Class

streams

This is a JSON object which contains OpenLABEL streams. Stream keys can be any string, for example, a friendly stream name.

Additional properties: false

Type: object

Diagram
Figure 31. Diagram of the streams class

stream

A stream describes the source of a data sequence, usually a sensor.

Additional properties: false

Type: object

Diagram
Figure 32. Diagram of the stream class
Table 9. Properties of the stream class

  • description (string): Description of the stream.

  • stream_properties (reference: #/definitions/stream_properties): Additional properties of the stream.

  • type (string): A string encoding the type of the stream.

  • uri (string): A string encoding the URI, for example, a URL, or file name, for example, a video file name, that the stream corresponds to.

JSON example

{
   "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "streams": {
            "Camera1": {
                "type": "camera",
                "uri": "./some_path/some_video.mp4",
                "description": "Frontal camera",
                "stream_properties": {
                    "intrinsics_pinhole": {
                        "camera_matrix_3x4": [ 1000.0,    0.0, 500.0, 0.0,
                                                    0.0, 1000.0, 500.0, 0.0,
                                                    0.0,    0.0,   0.0, 1.0],
                        "distortion_coeffs_1xN": [],
                        "height_px": 480,
                        "width_px": 640
                    }
                }
            }
        }
    }
}

The example shows the item streams, which contains information about the streams that contain the data to be labeled. In the example, a stream with name Camera1 is defined to be of type camera and to have some stream_properties, such as intrinsic calibration parameters.

7.7. Coordinate systems

A coordinate system is a numerical system to specify the coordinates of points and other geometric elements in a given space.

ASAM OpenLABEL defines mechanisms to represent labels which are often related to numerical properties of objects, such as position, size, or other physical magnitudes. Different coordinate systems may exist in arbitrary scenes that contain objects. Therefore, labels that represent numerical magnitudes of the objects need to be specified with respect to specific coordinate systems.

ASAM OpenLABEL has been devised to consider scenes as Euclidean spaces and right-handed Cartesian coordinate systems, where coordinates specify the distance from the origin along the specified axis. 2D and 3D coordinate systems are considered.

Points and other geometries expressed with respect to a particular coordinate system can be expressed with respect to another coordinate system using transformations between the coordinate systems.

Labels may be defined as relative to specific coordinate systems. This is particularly necessary for geometric labels, such as polygons, cuboids, or bounding boxes, which define magnitudes under a certain coordinate system. For example, a 2D line may be defined within the coordinate system of an image frame, and a 3D cuboid inside a 3D Cartesian coordinate system.

Coordinate systems shall be declared with a friendly name, used as an index, and in the form of parent-child links to establish their hierarchy:

  • type: The type of coordinate system is defined so reading applications have a simplified view of the hierarchy:

    • scene_cs: corresponds to static coordinate systems.

    • local_cs: a coordinate system of a rigid body, such as a vehicle, which carries the sensors with it.

    • sensor_cs: a coordinate system attached to a sensor.

    • custom_cs: any other coordinate system defined by the user.

type does not restrict the definition of complex coordinate system hierarchies. It is only intended to give a hint for parsing applications.
  • parent: Each coordinate system can declare its parent coordinate system in the hierarchy.

  • pose_wrt_parent: A default or static pose of this coordinate system with respect to the declared parent. It may be defined in several ways:

    • 4x4 homogeneous matrix

    • quaternion and translation

    • Euler angles and translation

If not defined, the coordinate system is assumed to be exactly the same as its parent coordinate system.
  • children: The list of children for this coordinate system.

In addition, as multiple coordinate systems may be defined, it is necessary to define mechanisms to declare how to convert values of magnitudes from one coordinate system to another. Therefore, transforms between two coordinate systems are also defined.

Class

coordinate_systems

This is a JSON object which contains OpenLABEL coordinate systems. Coordinate system keys can be any string, for example, a friendly coordinate system name.

Additional properties: false

Type: object

Diagram
Figure 33. Diagram of the coordinate systems class

coordinate_system

A coordinate system is a 3D reference frame. Spatial information on objects and their properties can be defined with respect to coordinate systems.

Additional properties: true

Diagram
Figure 34. Diagram of the coordinate system class
Table 10. Properties of the coordinate system class

  • children (array): List of children of this coordinate system.

  • parent (string, required): This is the string UID of the parent coordinate system this coordinate system refers to.

  • pose_wrt_parent (reference: #/definitions/transform_data): JSON object containing the transform data.

  • type (string, required): This is a string that describes the type of the coordinate system, for example, "local" or "geo".

7.8. Transforms

A transform is a mathematical expression which determines how a coordinate system relates to another. In ASAM OpenLABEL, transforms are composed of a rotation and a translation component in 3D Euclidean space. Transformations are understood as passive and are thus equivalent to poses between coordinate systems. Different alternatives are supported; an informative sketch working with one of these forms follows the list below:

  • Quaternion and translation vector

  • 4x4 Homogeneous matrix

  • Vector of Euler angles with sequence code, and translation vector
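As an informative illustration (not part of the standard), the following Python sketch builds a 4x4 homogeneous matrix from Euler angles and a translation, assuming the ZYX sequence used in the examples of this document; the function name is hypothetical.

import numpy as np

# Build a 4x4 homogeneous transform from ZYX Euler angles (yaw, pitch,
# roll, in radians) and a translation vector, one of the supported forms.
def transform_from_euler_zyx(rz, ry, rx, translation):
    cz, sz = np.cos(rz), np.sin(rz)
    cy, sy = np.cos(ry), np.sin(ry)
    cx, sx = np.cos(rx), np.sin(rx)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx  # rotation applied in ZYX sequence
    T[:3, 3] = translation
    return T

# Convert a point from the source to the destination coordinate system.
point_src = np.array([1.0, 2.0, 0.0, 1.0])  # homogeneous coordinates
point_dst = transform_from_euler_zyx(0.0, 0.17453, 0.0, [2.3, 0.0, 1.3]) @ point_src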

Class

transform

This is a JSON object with information about this transform.

Additional properties: true

Type: object

Diagram
Figure 35. Diagram of the transform class
Table 11. Properties of the transform class

  • dst (string, required): The string UID, that is, the name, of the destination coordinate system for geometric data converted with this transform.

  • src (string, required): The string UID, that is, the name, of the source coordinate system of the geometric data that this transform converts.

  • transform_src_to_dst (required, reference: #/definitions/transform_data): JSON object containing the transform data.

transform_data

JSON object containing the transform data.

Diagram
Figure 36. Diagram of the transform data class

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "coordinate_systems": {
            "odom": {
                "type": "scene_cs",
                "parent": "",
                "children": [
                    "vehicle-iso8855"
                ]
            },
            "vehicle-iso8855": {
                "type": "local_cs",
                "parent": "odom",
                "children": [
                    "CAM_1",
                    "CAM_2"
                ]
            },
            "CAM_1": {
                "type": "sensor_cs",
                "parent": "vehicle-iso8855",
                "children": [],
                "pose_wrt_parent": {
                    "matrix4x4": [0.984807753012208, 0.0, 0.17364817766693033, 2.3, 0.0, 1.0, 0.0, 0.0, -0.17364817766693033, 0.0, 0.984807753012208, 1.3, 0.0, 0.0, 0.0, 1.0]
                }
            },
            "CAM_2": {
                "type": "sensor_cs",
                "parent": "vehicle-iso8855",
                "children": [],
                "pose_wrt_parent": {
                    "euler_angles": [0.0, 0.17453292519943295, 0.0],
                    "translation": [2.3, 0.0, 1.3],
                    "sequence": "ZYX"
                }
            }
        },
       ...
   }
}

The example shows the coordinate_systems item having several coordinate systems defined, including coordinate systems specific for the cameras (CAM_1 and CAM_2) and other coordinate systems for the local and scene-level frameworks.

The transforms between coordinate systems may also be defined for each frame, overriding the default static pose defined above.

Transforms are defined with a friendly name used as index and the following properties:

  • src: The name of the source coordinate system. This shall be the name of a valid (declared) coordinate system.

  • dst: The destination coordinate system. This shall be the name of a valid (declared) coordinate system.

  • transform_src_to_dst: This is the transform expressed in algebraic form, for example, as a 4x4 matrix enclosing a 3D rotation and a 3D translation between the coordinate systems.

JSON example

{
    "openlabel" : {
        "metadata" : {
            "schema_version" : "1.0.0"
        },
        "coordinate_systems" : {
            "base" : {
                "type" : "local_cs",
                "parent" : "",
                "children" : []
            },
            "world" : {
                "type" : "scene_cs",
                "parent" : "",
                "children" : []
            }
        },
        "frames" : {
            "10" : {
                "frame_properties" : {
                    "transforms" : {
                        "base_to_world" : {
                            "src" : "base",
                            "dst" : "world",
                            "transform_src_to_dst" : {
                                "matrix4x4" : [1.0, 0.0, 0.0, 0.1, 0.0, 1.0, 0.0, 0.1, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0]
                            }
                        }
                    }
                }
            },
            "11" : {
                "frame_properties" : {
                    "transforms" : {
                        "base_to_world" : {
                            "src" : "base",
                            "dst" : "world",
                            "transform_src_to_dst" : {
                                "euler_angles" : [0.0, 0.0, 0.0],
                                "translation" : [1.0, 1.0, 0.0],
                                "sequence" : "ZYX"
                            },
                            "custom_property1" : 0.9,
                            "custom_property2" : "Some tag"
                        }
                    }
                }
            }
        }
    }
}

The example shows that the relationship between coordinate systems can be defined with transforms which can be defined for specific frames inside frame_properties. In the example, the transform between base and world coordinate systems is defined for frames 10 and 11.

In general, coordinate systems associated with sensors may have the same name as the corresponding streams. For instance, Camera1 can be the name of a coordinate system and also the name of a stream. In this way, a sensor, such as a camera or a LiDAR, has its internal data, for example intrinsics, defined at streams, while its external set-up with respect to other sensors is defined at coordinate_systems or via transforms at frame level.

With this structure, it is possible to describe particular and typical transformation cases, such as odometry poses of a vehicle with respect to a certain scene coordinate system:

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "frames": {
            "0": {
                "frame_properties": {
                    "transforms": {
                        "odom_to_vehicle-iso8855": {
                            "src": "odom",
                            "dst": "vehicle-iso8855",
                            "transform_src_to_dst": {
                                "matrix4x4": [1.0, 3.7088687554289227e-17, ...]
                            }
                        },
                        "raw_gps_data": [49.011212804408,8.4228850417969, ...],
                        "status": "interpolated"
                    }
                }
            }
        }
        ...
    }
}

The example shows a typical use case where the transforms encode the odometry, that is, the accumulated relative pose between a fixed coordinate system (in the example odom) and a moving coordinate system. In the example, vehicle-iso8855 represents the usual coordinate system of a moving vehicle located in the rear axle, following the ISO 8855 convention, specified in [11].

By using additional properties, it is possible to embed detailed and customized information about the transforms, such as additional non-linear coefficients. In the example, the entries for raw_gps_data are only exemplary.

7.9. Ontologies

The ontologies item shall contain pointers to knowledge repositories, for example, URLs of ontologies that are used in the ASAM OpenLABEL JSON data to define the semantic type of elements. Elements can then point to concepts in these ontologies, so an application may consult an element’s meaning or investigate additional properties.

The format of the pointers shall use a key-value structure, where the key is a non-constrained string as a unique identifier, and the value may be the URL of the ontology or knowledge repository.

Class

ontologies

This is the JSON object of OpenLABEL ontologies. Ontology keys are strings containing numerical UIDs or 32-byte UUIDs. Ontology values may be strings, for example, encoding a URI, or JSON objects containing a URI string and optional lists of included and excluded terms.

Additional properties: false

Type: object

Diagram
Figure 37. Diagram of the ontologies class

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "ontologies": {
            "0": "https://www.somedomain.org/ontology",
            "1": "https://www.someotherdomain.org/ontology"
        },
        "objects": {
            "0": {
                "name": "car1",
                "type": "Car",
                "ontology_uid": 0
            },
            "1": {
                "name": "person1",
                "type": "Person",
                "ontology_uid": 0
            },
            "2": {
                "name": "mobile_phone1",
                "type": "MobilePhone",
                "ontology_uid": 1
            }
        }
    }
}

The example shows that the objects car1 and person1 are of types Car and Person. The definition of these types can be found at the ontology with ontology_uid = 0. The definition of object mobile_phone1 can be found at the ontology with ontology_uid = 1.
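As an informative sketch (the helper name is hypothetical), an application can resolve the knowledge repository behind an element's type as follows, assuming the structure of the example above:

# Hypothetical helper: resolve the ontology URL behind an element's type
# via its "ontology_uid".
def ontology_url_for(openlabel, object_uid):
    obj = openlabel["objects"][object_uid]
    return openlabel["ontologies"][str(obj["ontology_uid"])]

# ontology_url_for(data["openlabel"], "2")
# -> "https://www.someotherdomain.org/ontology"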

7.10. Data types (geometric)

ASAM OpenLABEL defines geometric and non-geometric (generic) data types, which together add the flexibility needed to represent any kind of label or tag information.

This section provides details about geometric data types for the multi-sensor data labeling use case. Examples of object_data are used, but the ASAM OpenLABEL JSON schema also includes definitions of action_data, event_data, and context_data. The difference is that only object_data can be of both the geometric and the non-geometric type.

Geometric object_data types are more complex and have specific fields. Also, these types may contain generic object_data as attributes.

Rules

  • objects shall have a unique identifier.

  • object_data shall have a unique name.


7.10.1. Bounding boxes

Bounding boxes are geometric entities which enclose the shape of an object in Cartesian coordinates. Bounding boxes define minimum and maximum limits at each dimension so the entire object lies within the specified limits.

Bounding boxes are used to label objects and entities in 2D and 3D data representations, such as images or point clouds. Bounding boxes are useful as the most basic and compact representation of the position and size of an object. Bounding boxes have become the most popular labeling type for computer vision and machine learning because of their simplicity and good alignment with matrix operations in programming languages and hardware architectures.

There are three main bounding box types supported by ASAM OpenLABEL:

  • 2D bounding box

  • 2D rotated bounding box

  • 3D bounding box (cuboid)

2D bounding box (bbox)

A 2D bounding box is defined as a rectangle by an array of four floating point numbers:

Table 12. Attributes of the 2D bounding box

  • x (px): Specifies the x-coordinate of the center of the rectangle.

  • y (px): Specifies the y-coordinate of the center of the rectangle.

  • w (px): Specifies the width of the rectangle in the x/y-coordinate system.

  • h (px): Specifies the height of the rectangle in the x/y-coordinate system.

Table 12 shows the available attributes of a 2D bounding box.

Figure 38. 2D bounding box definition

Figure 38 shows a 2D bounding box on an image, enclosing an entire object defined by its center position (in pixels) and its width and height.

Class

bbox

A 2D bounding box is defined as a 4-dimensional vector [x, y, w, h], where [x, y] is the center of the bounding box and [w, h] represent the width (horizontal, x-coordinate dimension) and height (vertical, y-coordinate dimension), respectively.

Additional properties: true

Type: object

Diagram
Figure 39. Diagram of the bbox class
Table 13. Properties of the bbox class

  • attributes (reference: #/definitions/attributes): Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

  • coordinate_system (string): Name of the coordinate system with respect to which this object data is expressed.

  • name (string, required): This is a string encoding the name of this object data. It is used as an index inside the corresponding object data pointers.

  • val (array, required): The array of 4 values that define the [x, y, w, h] values of the bbox.

JSON example

"bbox": [{
    "name": "head",
    "val": [400, 200, 100, 120]
}]

The example shows a 2D bounding box serialized in JSON. The center of the rectangle is specified by the point x=400, y=200. The dimensions of the rectangle are specified by width=100 and height=120.
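Since the bounding box defines minimum and maximum limits at each dimension, the center-based form can be converted to corner coordinates. A minimal, informative Python sketch (the helper name is hypothetical):

# Convert a center-based ASAM OpenLABEL bbox [x, y, w, h] to the
# top-left/bottom-right corner form used by many other tools.
def bbox_to_corners(val):
    x, y, w, h = val
    return [x - w / 2, y - h / 2, x + w / 2, y + h / 2]

print(bbox_to_corners([400, 200, 100, 120]))  # [350.0, 140.0, 450.0, 260.0]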

For complex set-ups, it is possible to define the coordinate_system in which these magnitudes are expressed.

It is also possible to embed non-geometric object data.

JSON example

"bbox": {
    "name": "head",
    "val": [400, 200, 100, 120],
    "coordinate_system": "Camera1",
    "attributes" : {
        "boolean" : [{
                "name" : "visible",
                "val" : false
            }, {
                "name" : "occluded",
                "val" : false
            }
        ]
    }
}

The example shows non-geometric object data, such as visible and occluded, embedded in a bounding box.

An object can contain multiple bbox entries, for example, to represent the body, head, and arms of a human. The same applies to all other object_data.

2D rotated bounding box (rbbox)

A 2D rotated bounding box is defined as a rotated rectangle by an array of five numbers:

Table 14. Attributes of the 2D rotated bounding box

  • x (px): Specifies the x-coordinate of the center of the rectangle.

  • y (px): Specifies the y-coordinate of the center of the rectangle.

  • w (px): Specifies the width of the rectangle in the x/y-coordinate system (horizontal, x-coordinate dimension).

  • h (px): Specifies the height of the rectangle in the x/y-coordinate system (vertical, y-coordinate dimension).

  • alpha (radians): Specifies the rotation of the rotated bounding box. It is defined as a right-handed rotation, that is, positive from the x-axis towards the y-axis. The origin of rotation is placed at the center of the bounding box, that is, at [x, y].

Table 14 shows the available attributes of a 2D rotated bounding box.

Figure 40. 2D rotated bounding box definition

Figure 40 shows a 2D rotated bounding box on an image, enclosing an entire object defined by its center position (in pixels), its width and height, and the rotation angle.

Class

rbbox

A 2D rotated bounding box is defined as a 5-dimensional vector [x, y, w, h, alpha], where [x, y] is the center of the bounding box and [w, h] represent the width (horizontal, x-coordinate dimension) and height (vertical, y-coordinate dimension), respectively. The angle alpha, in radians, represents the rotation of the rotated bounding box, and is defined as a right-handed rotation, that is, positive from x to y axes, and with the origin of rotation placed at the center of the bounding box (that is, [x, y]).

Additional properties: true

Type: object

Diagram
Figure 41. Diagram of the rbbox class
Table 15. Properties of the rbbox class

  • attributes (reference: #/definitions/attributes): Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

  • coordinate_system (string): Name of the coordinate system with respect to which this object data is expressed.

  • name (string, required): This is a string encoding the name of this object data. It is used as an index inside the corresponding object data pointers.

  • val (array, required): The array of 5 values that define the [x, y, w, h, alpha] values of the rbbox.

JSON example

"rbbox": [{
    "name": "outline",
    "val": [400, 200, 100, 120, 0.785]
}]

The example shows a 2D rotated bounding box serialized in JSON. The center of the 2D rotated bounding box is specified by the point x=400, y=200. Its dimensions are specified by width=100 and height=120. Its rotation is specified by alpha=0.785.
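As an informative sketch (the helper name is hypothetical), the four corner points of a rotated bounding box can be computed by rotating the half-dimension offsets about the center:

import math

# Compute the four corner points of a rotated bounding box [x, y, w, h, alpha],
# applying a right-handed rotation (positive from the x-axis towards the
# y-axis) about the box center.
def rbbox_corners(val):
    x, y, w, h, alpha = val
    c, s = math.cos(alpha), math.sin(alpha)
    offsets = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(x + dx * c - dy * s, y + dx * s + dy * c) for dx, dy in offsets]

print(rbbox_corners([400, 200, 100, 120, 0.785]))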

3D bounding box (cuboid)

A 3D bounding box is a cuboid in 3D Euclidean space. It is defined by position, rotation, and size. Position and size are defined as 3-vectors, while rotation can be expressed in two alternative forms, using 4-vector quaternion notation or 3-vector Euler notation (to be applied in ZYX order equivalent to yaw-pitch-roll order).

One option is that the cuboid is defined as (x, y, z, qa, qb, qc, qd, sx, sy, and sz), where:

Table 16. Attributes of the 3D bounding box (cuboid) using quaternion

  • x (m): Specifies the x-coordinate of the 3D position of the center of the cuboid.

  • y (m): Specifies the y-coordinate of the 3D position of the center of the cuboid.

  • z (m): Specifies the z-coordinate of the 3D position of the center of the cuboid.

  • qa, qb, qc, qd: Specify the quaternion in non-unit form (x, y, z, and w) as in the SciPy convention.

  • sx (m): Specifies the dimension of the cuboid along the x-axis.

  • sy (m): Specifies the dimension of the cuboid along the y-axis.

  • sz (m): Specifies the dimension of the cuboid along the z-axis.

Table 16 shows the available attributes of a 3D bounding box (cuboid) using quaternion. The quaternions conform to the SciPy convention [17].

Another option is that the cuboid is defined as (x, y, z, rx, ry, rz, sx, sy, and sz), where:

Table 17. Attributes of the 3D bounding box (cuboid) using Euler angles

  • x (m): Specifies the x-coordinate of the 3D position of the center of the cuboid.

  • y (m): Specifies the y-coordinate of the 3D position of the center of the cuboid.

  • z (m): Specifies the z-coordinate of the 3D position of the center of the cuboid.

  • rz (rad): Specifies the Euler angle rz = yaw.

  • ry (rad): Specifies the Euler angle ry = pitch.

  • rx (rad): Specifies the Euler angle rx = roll.

  • sx (m): Specifies the dimension of the cuboid along the x-axis.

  • sy (m): Specifies the dimension of the cuboid along the y-axis.

  • sz (m): Specifies the dimension of the cuboid along the z-axis.

Table 17 shows the available attributes of a 3D bounding box (cuboid) using Euler angles.

Figure 42. 3D bounding box definition

Figure 42 shows a 3D bounding box (cuboid) in a 3D space plot. The same cuboid can be expressed using the two defined alternatives: using Euler angles in ZYX order, or using a quaternion. Note that the center of the cuboid is used as the origin of the cuboid coordinate system.

Class

cuboid

A cuboid or 3D bounding box. It is defined by the position of its center, its rotation in 3D, and its dimensions.

Additional properties: true

Type: object

Diagram
Figure 43. Diagram of the cuboid class
Table 18. Properties of the cuboid class

  • attributes (reference: #/definitions/attributes): Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

  • coordinate_system (string): Name of the coordinate system with respect to which this object data is expressed.

  • name (string, required): This is a string encoding the name of this object data. It is used as an index inside the corresponding object data pointers.

  • val (required): List of values encoding the position, rotation, and dimensions. Two options are supported, using 9 or 10 values. If 9 values are used, the format is (x, y, z, rx, ry, rz, sx, sy, sz), where (x, y, z) encodes the position, (rx, ry, rz) encodes the Euler angles of the rotation, and (sx, sy, sz) are the dimensions of the cuboid in its object coordinate system. If 10 values are used, the format is (x, y, z, qx, qy, qz, qw, sx, sy, sz), with the only difference being the rotation values, which are the 4 values of a quaternion.

JSON example

"cuboid": [{
    "name": "shape",
    "val": [12.0, 20.0, 0.0, 1.0, 1.0, 1.0, 1.0, 4.0, 2.0, 1.5]
}]

An alternative form uses nine numbers, substituting the quaternion vector with 3 Euler angles (rx, ry, rz), which define the rotation of the object coordinate system about the x-, y-, and z-axes, respectively. The rotation is assumed to be applied in ZYX order.
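As an informative sketch, the two forms can be converted into each other with SciPy, whose quaternion convention is the one cited above; the function name is hypothetical, and intrinsic ZYX (yaw-pitch-roll) rotations are assumed.

from scipy.spatial.transform import Rotation

# Convert a 9-value cuboid (Euler form) to the 10-value quaternion form.
# SciPy returns the quaternion in scalar-last (x, y, z, w) order.
def cuboid_euler_to_quaternion(val):
    x, y, z, rx, ry, rz, sx, sy, sz = val
    qx, qy, qz, qw = Rotation.from_euler("ZYX", [rz, ry, rx]).as_quat()
    return [x, y, z, qx, qy, qz, qw, sx, sy, sz]

print(cuboid_euler_to_quaternion([12.0, 20.0, 0.0, 0.0, 0.0, 1.57, 4.0, 2.0, 1.5]))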

7.10.2. Semantic segmentation: image and poly2d

Semantic segmentation responds to the need for more detailed annotations by defining one or more labels per pixel of a given image (for details about the different possible use cases and semantic segmentation taxonomy, see concept Semantic segmentation and example Semantic segmentation).

To facilitate visual perception, a color code for each class may be specified. The information on a certain pixel belonging to a certain category is expressed by assigning a specific RGB value to that pixel, which visually represents that category.

In terms of the data format, such dense information can be tackled with different approaches. Each of them has different purposes or responds to different needs:

  • Separate images: Historically, semantic segmentation information has been stored as separate images, usually formatted as PNG images (lossless). This is the simplest approach and the one offering the smallest storage footprint. However, it produces many separate files in the file system. Therefore, the main ASAM OpenLABEL JSON file may contain one or more URLs/URIs of these images.

JSON example

"objects": {
    "0": {
        "name": "",
        "type": "",
        "object_data": {
            "string": [
                {
                    "name": "semantic mask uri - dictionary 1",
                    "val": "/someURLorURI/someImageName1.png"
                },{
                    "name": "semantic mask uri - dictionary 2",
                    "val": "/someURLorURI/someImageName2.png"
                }
            ]
        }
    }
}
  • Embedded images: Image content can be encoded with any image processing software and expressed as a base64 string, which is then embedded within the JSON file. This approach creates large JSON files (base64 adds 4/3 overhead) but mitigates the need to manage multiple files:

JSON example

"objects": {
    "0": {
        "name": "",
        "type": "",
        "object_data": {
            "image": [
                {
                    "name": "semantic mask - dictionary 1",
                    "val": "iVBORw0KGgoAAAANSUhEUgAAAeAAAAKACAIAAADLqjwFAAAKu0lEQVR42u3dPW7VYBCGUSe6JWW6NCyDErEvKvaFKFkGDR0lfYiEkABFN8n9+fzOzDkFDRLgAT0a2Z/NtgEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABc0uevb25MASCwzo8/CjRAYp0FGiC0zgINEFpngQYIrbNAA4TWWaABQuss0AChdRZogMQ0CzRAbp0FGiC0zgINEFpngQYIrbNAA4TWWaABQuv84d1PgQZIrLMNGiC0zrmBfv/2o7/U0r58+2QIcE6dH90aH0BgnQUaILTOjw6GCLBvmp+ssw0aILTOAg0QWmeBBgits0ADhNZ585AQYH2dn02zDRogt85VN+jjb6nt9RbizY/vD3f3/rGCOl+kzptbHCdU+LSf1W5Q51fVWaDPLfLJv45egzoL9M5dfvbXV2pQZ4FOSfOTv51MQ9c0n1xngd4zzTIN6izQ0WmWaVBngY5Os0yDOgt0dJplGtT5b0PfJAyvc7k/J6jzxetcdYM+513BcsmzSkOhOl8qzRM36LoLqVUaptV5VqCrN06jYVSdBwW6R900GubUeUqgO3VNo2FInUcEul/RNBom1Hlrfw66a8t8exp2T/O169x8g+69adqjoXedOwd6Qr80GhrXuW2g55RLo6FrnXsGelqzNBpa1nnzNTtAnQPT3HODnrlOWqKhX527BXpypzQamtV5G/u5UUCdw+vcKtBWSBOATnW2QQPqHFrnPoG2PJoDNKvz5pgdIM2ZdW6yQVsbTQP61XlzDxpQ58w6dwi0hdFMoGWdbdCAOofWeav+kNCqeGQyvuiPOtdNsw0aUOfcOgs0oM65Cgfa/Q3zgcZ1tkED6izQAOr8Sl71BgaluVCdbdCAOgv0pXkCZkqoc+8626ABdRZoAHUWaECdG9R5c4oDaFznumm2QQPqLNAA6izQgDr3qLNAA+os0Bfl/QuzQp3b13kreorj4e5ed14+K0NgQpr71XlziwNQZ4EGUGeBBtRZoAHU+Xq86g2UrHPvNNugAXUWaAB1FmhAnQV6Z96/MCXUWaAB1HkfTnEA6WmeWWcbNKDOAg2gznMC7QmY+aDOAg2gzgINqLM6/1H7FIcv9x+ZjCFQtM7SbIMG1FmgrYpmgjqrsw0aUGeBtjCaBqizQAPqPFWTb3E4zmF9pmKa1dkGDaizQFseTQB1VmeBBtRZoK2Qrh3UeR/dPtg/82mhOlOlztI8d4MG1FmgrZOuF3VWZ4HWLFeKOgu0crlGUGeB1i9XhzozO9CNK6bOqPMEh/ZX2O/gnTqTn2Z1tkFPLJo6o84CrdGuAtRZoNVNnVFnhge6dOPUGXUe6DDtgn+XrtBjQ2mmRJ2l2QY9rnrqjDrboOc2OnaVlmbUmcPw6w/MtDSjzgh0XKalGXVGoOMyLc2oMwIdl2lpplya1VmgIzJ9vVLrMuqMQF+4pCf3WpFRZwR6aa//a7cKo85ckP80dkW7QZ0RaECd+3CLA9RZmm3QgDoj0IA6CzSgzgg0oM4CDagzCZziAGlWZxs0oM4INKizOgs0oM4INKDOw3hICHPrLM02aECdEWhQZ3UWaECdEWhAnQUaUGcEGlBnnuWYHfRPszrboAF1RqBBndVZoAF1RqABdeYfHhJCwzpLsw0aUGcEGtRZnQUaUGcEGlBnBBrUmYKc4oDaaVZnGzSgzgg0qLM6I9Cgzgg0oM4INKgzXTjFAZXqLM02aECd2d+NEYA6I9CAOiPQoM4INKDOCDTMSrM6I9Cgzgg0qLM6I9Cgzgg0oM4INHSvszQj0KDOCDSoszoj0KDOCDSgzgg0qDMCDSxOszoj0KDOCDSoszoj0KDOCDSgzgg0qDMINKyvszQj0KDOCDSoszoj0KDOCDSgzgg0qDMINCxOszoj0KDOCDSoszoj0KDOINCgzgg0dK+zNCPQoM4INKizOiPQoM4g0KDOCDSoMwg0qDMCDdKszgg0qDMINOqszgg0qDMINKgzAg3t6yzNCDSoMwg06qzOCDSoMwg0qDMCDeoMAg2L06zOCDSoMwg06qzOCDSoMwg0qDMCDeoMAg3r6yzNCDSoMwg06qzOCDSoMwg0qDMCDeoMAg2L06zOCDSoMwg06qzOCDSoMwg0qDMINN3rLM0INKgzCDTqrM4INKgzCDSoMwg06gwCDeoMAo00qzMCDeoMAo06qzMINOoMAg3qDAJN+zpLMwIN6gwCjTqrMwg06gwCDeoMAo06g0DD4jSrMwg06gwCjTqrMwg06gwCDeoMAo06g0DD+jpLMwg06gwCjTqrMwg06gwCDeoMAo06AwLN4jSrMwg06gwCjTqrMwg06gwCDeoMAk33OkszCDTqDAKNOqszCDTqDAIN6gwCjToDAs3iNKszCDTqDAKNOqszCDTqDAg06gwCjToDAs36OkszCDTqDAKNOqszCDTqDAg06gwCjToDAs3iNKszCDTqDAi0OqszCDTqDAg06gwCTfc6SzMINOoMCLQ6qzMINOoMCDTqDAKNOgMCjToDAi3N6gwCjToDAq3O6gwCjToDAo06g0DTvs7SDAKNOgMCrc7qDAKNOgMCjToDAq3OgECzOM3qDAKNOgMCrc7qDAKNOgMCjToDAq3OgECzvs7SDAKNOgMCrc7qDAJtBOoMCDTqDAi0OgMCzeI0qzMINOoMCLQ6qzMg0OoMCDTqDAh09zpLMwg06gwItDqrMyDQ6gwINOoMCLQ6AwKNOgMCLc3qDAi0OgMCrc7qDAi0OgMCjToDAt2+ztIMCLQ6AwKtzuoMCLQ6AwKNOgMCrc4AAr04zeoMCLQ6AwKtzuoMCLQ6AwKNOgMCrc4AAr2+ztIMCLQ6AwKtzuoMCLQ6Awi0OgMCrc4AAr04zeoMCLQ6AwKtzuoMCLQ6Awi0OgMC3b3O0gwItDoDAq3O6gwItDoDCLQ6AwKtzgACvTjN6gwItDoDTA20OgMCrc4AAq3OgECrM4BA71lnaQYEWp0BpgZanQGBVmcAgVZnQKDVGUCgd0uzOgMCrc4AUwOtzoBAqzOAQKszQN1AO7ABCLQ6Awi0OgPUDbQ6AwKtzgACrc4AdQOtzoBAl0+zOgMCrc4AUwOtzgCJgVZngMRAqzNAYqAd2ABIDLQ6AyQGWp0BEgOtzgCJgVZngMRAqzNAXKAdpwNIDLQ6AyQGWp0BEgOtzgCJgVZngMRAqzNAYqAdpwNIDLQ6AyQGWp0BEgOtzgCJgVZngMRAqzPAmW4T/hDqDJAYaHUGeNJVbnG8/P6GOgMsDfQLG63OAEfsdotDnQGOO0gzQKbV56DVGSDICd+xAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAqOcXcO/DOJCe2z8AAAAASUVORK5CYII=",
                    "mime_type": "image/png",
                    "encoding": "base64"
                }
            ]
        }
    }
}
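As an informative illustration, the following sketch builds such an embedded-image entry from a PNG file on disk; the helper name and file path are illustrative only.

Python example (informative)

import base64

def make_embedded_mask(name, png_path):
    # Read a PNG mask and wrap it as an OpenLABEL "image" object_data entry.
    with open(png_path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return {
        "name": name,
        "val": payload,
        "mime_type": "image/png",
        "encoding": "base64",
    }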
  • Polygons: Another option is to decompose the entire semantic segmentation mask into different classes or object instances. This approach has the benefit of identifying individual objects directly within the JSON file. Thus, a user application can directly read specific objects, without the need to load the PNG image and find the object of interest. The trade-off is an increased JSON size. Polygons (2D) can be expressed directly as lists of x,y-coordinates, using MODE_POLY2D_ABSOLUTE. However, this may create very large and redundant data. Lossless compression mechanisms can be applied to convert the, possibly long, list of x,y-coordinates into smaller strings:

JSON example

"objects": {
    "0": {
        "name": "car1",
        "type": "#Car",
        "object_data": {
            "poly2d": [
                {
                    "name": "poly1",
                    "val": ["5","5","1","mBIIOIII"],
                    "mode": "MODE_POLY2D_SRF6DCC",
                    "closed": false
                }, {
                    "name": "poly2",
                    "val": [5,5,10,5,11,6,11,8,9,10,5,10,3,8,3,6,4,5],
                    "mode": "MODE_POLY2D_ABSOLUTE",
                    "closed": false
                }
            ]
        }
    }
}

The example shows the following:

  • RLE or Chain Code algorithms can losslessly compress a sequence of x,y-coordinates. The poly2d.py script applies such a compression to polyline poly1, which is therefore specified with mode MODE_POLY2D_SRF6DCC. Polyline poly2 is encoded with no compression, and thus the specified mode is MODE_POLY2D_ABSOLUTE.

  • Using polygons implies that labels are created at object-level, rather than image-level. This might be useful, for example, for searching applications that locate all objects of type car.

Using PNG masks, either as separate files or embedded inside the JSON file, is the preferred way to store labels for machine-learning applications. Such applications do not search inside the masks but feed them directly into training pipelines.

7.10.3. Poly3d

A poly3d is an object_data that represents a polyline in 3D space. It is defined as a list of 3D points. The array is a concatenation of x,y,z-values, corresponding to the x-, y-, and z-coordinates of each point with respect to the defined coordinate system. Therefore, the array shall always contain a number of values that is a multiple of 3.

Class: poly3d

A 3D polyline defined as a sequence of 3D points.

Additional properties: true
Type: object

Diagram
Figure 44. Diagram of the poly3d class
Table 19. Properties of the poly3d class
  • attributes (reference: #/definitions/attributes): Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

  • closed (type: boolean, required): A boolean that defines whether the polyline is closed or not. In case it is closed, it is assumed that the last point of the sequence is connected with the first one.

  • coordinate_system (type: string): Name of the coordinate system in respect of which this object data is expressed.

  • name (type: string, required): This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

  • val (type: array, required): List of numerical values of the polyline, according to its mode.

JSON example

"poly3D" : [{
    "closed" : false,
    "coordinate_system" : "vehicle_iso8855",
    "name" : "lane_marking",
    "val" : [557.02, 29.69, -1.63, 562.51, 29.97, -1.59, 568.00, 30.36, -1.58, 571.98, 30.76, -1.57]
}]

The example shows a poly3d object_data with four points, and thus 4 x 3 = 12 values.
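As an informative sketch, a consumer may recover the individual points by grouping the flat val array into x,y,z-triplets:

Python example (informative)

val = [557.02, 29.69, -1.63, 562.51, 29.97, -1.59,
       568.00, 30.36, -1.58, 571.98, 30.76, -1.57]
assert len(val) % 3 == 0  # the array shall contain a multiple of 3 values
points = [tuple(val[i:i + 3]) for i in range(0, len(val), 3)]
# points[0] == (557.02, 29.69, -1.63); len(points) == 4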

7.10.4. Mesh

mesh is a special type of object_data, which describes a complex structure with point-line-area hierarchies. It is intended to represent 3D meshes, where points, lines, and areas compose the mesh by defining their interrelations. The elements point, line, and area may have their own properties, just like any other object_data.

Class: mesh

A mesh encodes a point-line-area structure. It is intended to represent flat 3D meshes, such as several connected parking lots, where points, lines and areas composing the mesh are interrelated and can have their own properties.

Additional properties: true
Type: object

Diagram
Figure 45. Diagram of the mesh class
Table 20. Properties of the mesh class
  • area_reference (type: object, additional properties: false, reference: #/definitions/area_reference): This is the JSON object for the areas defined for this mesh. Area keys are strings containing numerical UIDs.

  • coordinate_system (type: string): Name of the coordinate system in respect of which this object data is expressed.

  • line_reference (type: object, additional properties: false, reference: #/definitions/line_reference): This is the JSON object for the 3D lines defined for this mesh. Line reference keys are strings containing numerical UIDs.

  • name (type: string): This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

  • point3d (type: object, additional properties: false, reference: #/definitions/point3d): This is the JSON object for the 3D points defined for this mesh. Point3d keys are strings containing numerical UIDs.

JSON example

"mesh" : [{
    "name" : "parkslot1",
    "point3d" : {
        "0" : {
            "name" : "Vertex0",
            "val" : [25, 25, 0],
        },
        "1" : {
            "name" : "Vertex1",
            "val" : [26, 25, 0],
        },
        "2" : {
            "name" : "Vertex2",
            "val" : [26, 26, 0],
        },
        "3" : {
            "name" : "Vertex3",
            "val" : [25, 26, 0],
        },
        "4" : {
            "name" : "Vertex4",
            "val" : [27, 25, 0],
        },
        "5" : {
            "name" : "Vertex5",
            "val" : [27, 26, 0],
        }
    },
    "line_reference" : {
        "0" : {
            "name" : "Edge",
            "reference_type" : "point3d",
            "val" : [0, 1],
        },
        "1" : {
            "name" : "Edge",
            "reference_type" : "point3d",
            "val" : [1, 2],
        },
        "2" : {
            "name" : "Edge",
            "reference_type" : "point3d",
            "val" : [2, 3],
        },
        "3" : {
            "name" : "Edge",
            "reference_type" : "point3d",
            "val" : [3, 0],
        },
        "4" : {
            "name" : "Edge",
            "reference_type" : "point3d",
            "val" : [1, 4],
        },
        "5" : {
            "name" : "Edge",
            "reference_type" : "point3d",
            "val" : [4, 5],
        },
        "6" : {
            "name" : "Edge",
            "reference_type" : "point3d",
            "val" : [5, 2],
        }
    },
    "area_reference" : {
        "0" : {
            "name" : "Slot",
            "reference_type" : "line_reference",
            "val" : [0, 1, 2, 3],
        },
        "1" : {
            "name" : "Slot",
            "reference_type" : "line_reference",
            "val" : [4, 5, 6, 1],
        }
    }
}]

The example shows that mesh is well suited as object_data for describing complex parking areas, where parking slots can share lines and points. Properties of areas may define whether a parking slot is empty or occupied.

A mesh contains a dictionary of point3d elements. Their keys may be used to specify lines via line_reference. Line references are also stored as a dictionary, so their keys may in turn be used to specify areas via area_reference.

The elements point3d, line_reference, and area_reference are object_data. They may have attributes of non-geometric type, that is, boolean, text, num and vec. This gives them full flexibility to describe complex meshes.

JSON example

"6" : {
    "name" : "Edge",
    "reference_type" : "point3d",
    "val" : [5, 2],
    "attributes" : {
        "text" : [{
                "name" : "line_type",
                "val" : "dashed"
            }, {
                "name" : "line_color",
                "val" : "yellow"
            }
        ]
    }
}

The example shows a line_reference with attributes.

A line_reference shall have exactly two reference points, as a line is defined by two points. An area_reference may have as many line references as needed, since it may represent a complex polygon.
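As an informative illustration of how the reference hierarchy is traversed, the following sketch collects the unique 3D vertices of one area by following area_reference to line_reference to point3d. The function name is illustrative; label stands for the parsed JSON content of the mesh example above.

Python example (informative)

def area_vertices(mesh, area_uid):
    # Collect the unique 3D vertices referenced by one area of a mesh,
    # following area_reference -> line_reference -> point3d.
    vertices = []
    for line_uid in mesh["area_reference"][area_uid]["val"]:
        for point_uid in mesh["line_reference"][str(line_uid)]["val"]:
            point = mesh["point3d"][str(point_uid)]["val"]
            if point not in vertices:
                vertices.append(point)
    return vertices

# For the example above, area_vertices(label["mesh"][0], "0") yields
# [[25, 25, 0], [26, 25, 0], [26, 26, 0], [25, 26, 0]].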

7.10.5. Mat and binary

Matrices and binary data are a special form of data and may be expressed using the mat and binary object_data types.

  • Matrices are defined by the number of rows, columns, and channels. The numerical values are stored as a list of numbers.

  • Binary data may be defined by an encoding format and data type.

mat is useful to define lists of points, such as a 3xN array of N 3D points (or 4xN in homogeneous coordinates), which may be points from a point cloud file.
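As an informative sketch, the following helper packs a list of 3D points into such a mat entry. The row-major flattening (all x values, then all y, then all z) is an assumption of this sketch, as is the helper name; only the property names follow Table 21 below.

Python example (informative)

def points_to_mat(points, name="point_list"):
    # Pack N 3D points into a 3xN "mat" entry, one point per column,
    # flattened row by row (assumed convention of this sketch).
    flat = [p[row] for row in range(3) for p in points]
    return {
        "name": name,
        "width": len(points),   # number of columns
        "height": 3,            # number of rows
        "channels": 1,
        "data_type": "float",
        "val": flat,
    }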

Class: mat

A matrix.

Additional properties: true
Type: object

Diagram
Figure 46. Diagram of the mat class
Table 21. Properties of the mat class
  • attributes (reference: #/definitions/attributes): Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

  • channels (type: number, required): Number of channels of the matrix.

  • coordinate_system (type: string): Name of the coordinate system in respect of which this object data is expressed.

  • data_type (type: string, required): This is a string that declares the type of the numerical values of the matrix, for example, "float".

  • height (type: number, required): Height of the matrix, expressed in number of rows.

  • name (type: string, required): This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

  • val (type: array, required): Flattened list of values of the matrix.

  • width (type: number, required): Width of the matrix, expressed in number of columns.

Class: binary

A binary payload.

Additional properties: true
Type: object

Diagram
Figure 47. Diagram of the binary class
Table 22. Properties of the binary class
  • attributes (reference: #/definitions/attributes): Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

  • coordinate_system (type: string): Name of the coordinate system in respect of which this object data is expressed.

  • data_type (type: string, required): This is a string that declares the type of the values of the binary object.

  • encoding (type: string, required): This is a string that declares the encoding type of the bytes for this binary payload, for example, "base64".

  • name (type: string, required): This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

  • val (type: string, required): A string with the encoded bytes of this binary payload.

7.10.6. Point2d and Point3d

Point2d and Point3d are basic structures to define individual points in 2D and 3D space. They are object_data.

point2d and point3d are defined by their value, given as a list of two or three floating-point numbers, respectively.

In addition, point2d and point3d have an id attribute as a numerical identifier. This may be used to integrate them into larger structures, for example, a mesh.

Class: point2d

A 2D point.

Additional properties: true
Type: object

Diagram
Figure 48. Diagram of the point2d class
Table 23. Properties of the point2d class
  • attributes (reference: #/definitions/attributes): Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

  • coordinate_system (type: string): Name of the coordinate system in respect of which this object data is expressed.

  • id (type: integer): This is an integer identifier of the point in the context of a set of points.

  • name (type: string, required): This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

  • val (type: array, required): List of two coordinates to define the point, for example, x, y.

Class: point3d

A 3D point.

Additional properties: true
Type: object

Diagram
Figure 49. Diagram of the point3d class
Table 24. Properties of the point3d class
  • attributes (reference: #/definitions/attributes): Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

  • coordinate_system (type: string): Name of the coordinate system in respect of which this object data is expressed.

  • id (type: integer): This is an integer identifier of the point in the context of a set of points.

  • name (type: string, required): This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

  • val (type: array, required): List of three coordinates to define the point, for example, x, y, z.

7.11. Resources

The resources item shall contain pointers to external resources, such as files or databases, which may contain additional information about elements labeled in the ASAM OpenLABEL data. Inside each resource, a unique identifier of the element shall be used to create the link.

An example is a lane marking labeling task. If a high-definition map exists in the form of an ASAM OpenDRIVE file, road or lane elements labeled in ASAM OpenLABEL may also exist in that map. A link to the matched road or lane can then be created using a resource_uid together with the id of the element inside the resource.

Class: resources

This is the JSON object of OpenLABEL resources. Resource keys are strings containing numerical UIDs or 32-byte UUIDs. Resource values are strings that describe an external resource, for example, file names or URLs, that may be used to link data of the OpenLABEL annotation content with external existing content.

Additional properties: false
Type: object

Diagram
Figure 50. Diagram of the resources class

JSON example

{
	"openlabel" : {
		"metadata" : {
			"schema_version" : "1.0.0"
		},
		"resources" : {
			"0" : "../resources/xodr/multi_intersections.xodr"
		},
		"objects" : {
			"0" : {
				"name" : "road1",
				"type" : "road",
				"resource_uid" : {
					"0" : "217"
				}
			},
			"1" : {
				"name" : "lane1",
				"type" : "lane",
				"resource_uid" : {
					"0" : "3"
				}
			}
		}
	}
}

The example shows that lane1 is labeled as an object of type lane. lane1 exists in resource 0 with resource_uid 3, which means that the id of the lane inside the resource is 3.
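The following informative sketch resolves such links; data stands for the parsed JSON content of the example above, and the function name is illustrative.

Python example (informative)

def resolve_resource_links(data, object_uid):
    # Pair each resource_uid entry of an object with the resource it
    # points into, returning (resource, element_id) tuples.
    content = data["openlabel"]
    obj = content["objects"][object_uid]
    return [(content["resources"][res_uid], element_id)
            for res_uid, element_id in obj.get("resource_uid", {}).items()]

# resolve_resource_links(data, "1") yields
# [("../resources/xodr/multi_intersections.xodr", "3")].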

7.12. Use cases

The following section provides practical use cases for ASAM OpenLABEL.

7.12.1. 2D bounding boxes

This use case shows object labeling with 2D bounding boxes in images.

Single images and sequences of images are presented separately to show the differences between static and dynamic labeling, for example, with a persistent ID for tracked objects.

Single image

The single image approach aims at adding bounding boxes to define the position and size of objects in a single image. Variants of this labeling task may include adding other properties of the object or attributes to the bounding boxes, for example, confidence values.

Figure 51. Example image

Figure 51 shows an exemplary traffic situation.

Figure 52. Example image with resulting bounding boxes

Figure 52 shows the target dictionary of classes and their bounding boxes, including Car, Bus, Semaphore, and ZebraCross. Only some objects are marked with their respective class for demonstration purposes.

The ASAM OpenLABEL openlabel100_test_bbox_simple.json file contains basic bounding boxes defined for each object.

The ASAM OpenLABEL openlabel100_test_bbox_simple_attributes.json file contains extended properties of objects and bounding boxes.

7.12.2. 3D bounding boxes (cuboids)

This use case shows an example of 3D bounding boxes (or cuboids) labels. The example shows the creation of object labels in a sequence of point clouds obtained from a LiDAR sensor. Labels correspond to physical objects, that is, cars and pedestrians.

Figure 53. Example visualization of a cuboid in a point cloud view

Cuboids have been produced automatically using a 3D object detector on the point cloud. Figure 53 shows several cuboids which enclose the 3D points that correspond to physical objects, that is, cars and a pedestrian, in the point cloud.

In this example, the cuboids are expressed using the preferred quaternion-translation vector form, which implies that ten values define the cuboid data, as explained in 3D bounding box (cuboid).

The ASAM OpenLABEL openlabel100_example_cuboids.json file contains the labels for the entire scene, including transform entries for all frames representing the odometry values obtained with a differential GPS.

7.12.3. Point clouds

Labeling point clouds in ASAM OpenLABEL is performed using a similar approach to 2D image segmentation.

A point cloud is a set of 3D points, each of them corresponding to a certain 3D position in space. Each point may be given additional values, depending on the source sensor or process. LiDAR sensors, for example, usually attach a timestamp and intensity values to each point.

Labeling a point cloud means adding a label to each point which determines the class that point corresponds to, for example, car or pavement.

The number of points in a point cloud depends on the source sensor or application. When this number is large, for example, several millions, an encoding strategy is preferable in order to compress the disk space required to store the labels.

Labels correspond to classes, and integer indexes are used to encode class values. For example, class car can be encoded as 0, pedestrian as 1, pavement as 2, etc. A dictionary with the class-index map needs to be stored externally.

By indexing class labels as integers, the set of labels of a point cloud becomes a list or sequence of integers, where the position of a label in the list shall correspond to the position of the point within the point cloud.

For example, 11122222222000000000000…​ is a list of label indexes, each of them labeling a 3D point as belonging to class 1, 2, etc.

In ASAM OpenLABEL two approaches are defined to represent such a list of labels.

One is to use an external file which contains the data values, possibly in binary form.

JSON example

{
  "objects": {
    "0": {
      "name": "3DPointCloudSegmentation0",
      "type": "3DPointCloudSegmentation",
      "object_data": {
        "text": [{
          "name": "uri",
          "val": "http://semantic3d.net/data/sem8_labels_training.7z"
        }]
      }
    }
  }
}

The example shows an object which contains a URI to an external file containing the labels of the 3D point cloud.

The second option is to embed a stringified version of the label values into the ASAM OpenLABEL JSON payload.

A lossless compression approach is recommended to reduce the potentially large volume of data of this payload. For example, several million integers may be used to represent the labels of an entire point cloud.

Considering the nature of the labels of point clouds, that is, many repeated labels for 3D points that are close in space, a Run-Length-Encoding (RLE) mechanism may significantly compress it.

As an example, 11122222222000000000000 is converted into #3V1#8V2#13V0, where the number after character # defines the count of the repeated value, which follows character V. In this example, there are three consecutive 1 labels, then eight consecutive 2 labels, then thirteen consecutive 0 labels. Using this approach, the compression ratio depends on the data, but it exceeds 1.0 whenever the runs of repeated values are, on average, longer than four.
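An informative sketch of this encoding and its inverse follows; the function names are illustrative.

Python example (informative)

import itertools
import re

def rle_encode(labels):
    # One "#<count>V<value>" token per run, e.g. [1, 1, 1] -> "#3V1".
    return "".join(f"#{len(list(run))}V{value}"
                   for value, run in itertools.groupby(labels))

def rle_decode(payload):
    # Inverse of rle_encode: expand each token back into a run of labels.
    labels = []
    for count, value in re.findall(r"#(\d+)V(\d+)", payload):
        labels.extend([int(value)] * int(count))
    return labels

assert rle_encode([1] * 3 + [2] * 8 + [0] * 13) == "#3V1#8V2#13V0"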

JSON example

{
  "objects": {
    "1": {
      "name": "3DPointCloudSegmentation1",
      "type": "3DPointCloudSegmentation",
      "object_data": {
        "binary": [{
          "name": "labels",
          "val": "#2142V6#21379V5#902V3#762V5#3V3#2195V2#36V6#11V2#2V6#2V2#17V6#2V2#4V6#2V2#10V6#720V2#1V6#1V2#3V6#3V2#42V6#50V2#2V6#3V2#25V6#12V2#5V6#1V2#12V6#12V2#1V6#2V2#3V6#1V2#20V6#57V2#5V6#7V2#1V6#1V2#7V6#3V2#29V6#2752V2#3V6#4V2#3V6#12V2#1V6#1V2#5V6#2V2#5V6#1V2#6V6#1V2#3V6#1V2#12V6#45V2#18V6#7V2#76V6#333V2#1V6#2V2#5V6#1V2#1V6#1V2#2V6#20V2#2V6#5V2#193V6#421V2#1V6#406V2#8V6#2V2#1V6#3V2#1V6#4V2#1V6#1V2#17V6#94V2#24V6#1V2#33V6#7V2#2V6#51V2#74V6#640V2#1V6#4V2#12V6#2V2#21V6#16V2#63V6#1154V2#3V6#2502V2#3V3#1V2#121V3#76V2#26V3#354V2#1V3#1V2#6V3#3V2#1V3#6V2#6V3#1V2#2V3#5V2#2V3#5125V2#10812V3#36244V2#2V5#1V2#32V5#17V2#2V5#1V2#18V5#7V2#29V5#3V2#1V5#8V2#4V5#5V2#2V5#1V2#20V5#19V2#4V5#8V2#1V5#9V2#93V5#548V2#2V5#2V2#5V5#1V2#1V5#2V2#66V5#380V2#4V5#6V2#1V5#1V2#2V5#1V2#56V5#5V2#1V5#1V2#1V5#5V2#3V5#5V2#1V5#3V2#19V5#3V2#2V5#5V2#4V5#5V2#2V5#1V2#3V5#3V2#99V5#7V2#1049V5#11748V2#174V3#1195V2#1V3#1V2#1V3#3V2#1V3#7V2#17V3#34V2#24V3#8992V2#1V3#31V2#1V3#2V2#2V3#9655V2#1V3#2V2#20V3#7V2#2V3#3V2#39V3#4V2#13V3#3V2#6V3#2V2#1V3#3V2#6V3#1V2#20V3#7V2#6V3#8V2#1V3#1V2#112V3#5V2#273V3#2V2#494V3#4V2#472V3#32V2#5V3#2V2#5V3#7V2#16V3#3V2#3V3#12212V2#46972V5#231V2#1V5#2V2#1V5#6V2#4V5#1V2#1V5#4V2#2V5#2V2#65V5#14V2#1V5#2V2#2V5#6V2#1V5#2V2#26V5#8V2#47V5#7V2#4V5#6V2#29V5#2V2#1V5#1V2#1V5#4V2#7V5#1V2#136V5#4V2#1V5...",
          "data_type": "",
          "encoding": "rle"
        }]
      }
    }
  }
}

In this example, a pseudo JSON object is shown with an RLE-encoded payload of a list of label indexes embedded inside a binary element, where the encoding type is specified as rle.

The RLE-based encoding and decoding process can be implemented very efficiently and can be thought of as equivalent to embedding PNG image payloads for 2D semantic segmentation.
The RLE-based compression ratio is about 1000:1 for the examples of this dataset, where labels are provided as CSV files [18]. For example, point cloud bildstein_station1 contains ~29 million points. The labels file (CSV) has 3 bytes per point (label, whitespace, and separator), which makes ~89 MB (~58 MB if the whitespace is removed). The ASAM OpenLABEL RLE-based approach produces an 88 kB JSON file.
Figure 54. 3D point cloud bildstein_station1 [18]
Figure 55. 3D point cloud segmentation bildstein_station1 [18]

Figure 54 shows a render of a 3D point cloud, colored according to RGB values obtained with a camera sensor. Figure 55 shows the same render with the 3D points colored according to their associated class.

The ASAM OpenLABEL openlabel100_point_cloud_labels_rle.json file contains the 3D point cloud segmentation of bildstein_station1 using RLE encoding.

7.12.4. Semantic segmentation

This use case shows a complete ASAM OpenLABEL JSON file corresponding to a semantic segmentation of an image labeled at pixel-level. Two variants are considered:

  • Class-level annotation

  • Instance-level annotation

The input data are PNG images from existing open-source datasets which contain semantic segmentation at pixel level. The outputs are ASAM OpenLABEL JSON files covering different encoding options.

Class labels

The class label approach labels an image such that each pixel is categorized as belonging to a certain class.

The example image and dictionary of classes are derived from the Mapillary Vistas Dataset (image -3-MmXdwhyIQhtb4-8NqHQ) [19].

Figure 56. Example of a PNG-colored image [19]

These types of labels are represented as PNG images, where each pixel is painted with a certain RGB color according to its class, as shown in Figure 56. To parse the labels, a PNG image is needed, along with a configuration file which contains the class dictionary. This dictionary maps RGB colors to classes.

There are three example JSON files for this use case:

Instance labels

These labels are frequently represented as PNG images with instance-coded classes. That means that each RGB color value corresponds to a class in a dictionary and an instance identifier.

Figure 57. Example of an image with contrast enhanced [19]

Figure 57 shows an example instance image from the Mapillary Vistas Dataset (image -3-MmXdwhyIQhtb4-8NqHQ) with contrast enhanced for better visualization [19].

There are three example JSON files for this use case:

Full and partial scene segmentation

This use case shows examples for partial and full scene segmentation.

Figure 58. Example of an original image used for semantic segmentation

Figure 58 shows an example image of a typical traffic scene.

Note the following:

  • It contains instantiable objects (cars).

  • It contains non-instantiable objects (sky, vegetation, …​).

  • The two main cars overlap.

Figure 59. Example of a semantic segmentation that is non instance-aware

Figure 59 shows a partial segmentation of the image which is non-instance aware.

  • It is partially segmented because only some parts of the image have been labeled, for example, the cars. Other parts of the image are left grayed and unlabeled.

  • It is non-instance aware because pixels are labeled according only to their class. In this way, a big blob of pixels in the center of the image is assigned the same label because they correspond to the class car. Thus, the overlapping cars cannot be separated from this information alone.

JSON example

{
    "objects": {
      "0": {
        "name": "class0",
        "type": "car",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [425, 143, 424, 144, 423, 144, 422, 145, 422, 146, 419, 149, 419, 150, 418, 151, 418, 169, 421, 169, 421, 167, 422, 166, 440, 166, 441, 167, 441, 169, 445, 169, 445, 156, 446, 155, 447, 155, 447, 152, 445, 152, 444, 151, 444, 148, 442, 146, 442, 145, 440, 143],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [1, -1, -1, -1]
            }, {
              "name": "contour1",
              "val": [118, 112, 117, 113, 107, 113, 106, 114, 106, 115, 105, 116, 105, 117, 104, 118, 104, 120, 103, 121, 103, 124, 102, 125, 102, 129, 101, 130, 101, 134, 100, 135, 100, 137, 99, 138, 99, 167, 98, 168, 96, 168, 95, 169, 95, 174, 97, 176, 98, 176, 99, 177, 100, 177, 101, 178, 105, 178, 107, 180, 120, 180, 121, 181, 121, 184, 122, 185, 122, 186, 126, 190, 127, 190, 128, 191, 145, 191, 150, 186, 154, 186, 155, 187, 182, 187, 183, 188, 220, 188, 221, 189, 221, 196, 220, 197, 220, 199, 219, 200, 219, 201, 218, 202, 218, 205, 217, 206, 217, 221, 218, 222, 218, 224, 219, 225, 219, 248, 221, 250, 221, 251, 223, 253, 235, 253, 235, 247, 236, 246, 236, 242, 238, 240, 245, 240, 246, 241, 254, 241, 255, 242, 260, 242, 261, 243, 266, 243, 267, 244, 279, 244, 280, 245, 297, 245, 298, 246, 344, 246, 345, 245, 349, 245, 350, 246, 350, 250, 349, 251, 349, 259, 363, 259, 363, 258, 364, 257, 364, 252, 365, 251, 365, 231, 366, 230, 366, 221, 367, 220, 367, 214, 368, 213, 368, 198, 367, 197, 367, 181, 362, 176, 362, 175, 361, 174, 361, 173, 362, 172, 370, 172, 371, 171, 372, 171, 373, 170, 374, 170, 374, 165, 372, 163, 362, 163, 362, 167, 361, 168, 360, 168, 359, 167, 359, 161, 358, 160, 358, 154, 357, 153, 357, 152, 356, 151, 356, 150, 354, 148, 354, 147, 353, 146, 353, 145, 352, 144, 352, 142, 350, 140, 350, 139, 348, 137, 348, 136, 344, 132, 343, 132, 342, 131, 339, 131, 338, 130, 329, 130, 328, 129, 267, 129, 266, 130, 261, 130, 260, 131, 257, 131, 255, 133, 254, 133, 248, 139, 234, 125, 233, 125, 230, 122, 229, 122, 227, 120, 226, 120, 225, 119, 224, 119, 223, 118, 221, 118, 220, 117, 215, 117, 214, 116, 207, 116, 206, 115, 193, 115, 192, 114, 172, 114, 171, 113, 133, 113, 132, 112],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, 0, -1, -1]
            }]
        }
        ...
      }
    }
}

The example shows the JSON objects corresponding to class car. It contains two contours:

  • A contour for the small car at the right of the image.

  • A contour for the center blob. It corresponds to two cars that are not distinguished in this type of non-instance aware semantic segmentation.

Figure 60. Example of a semantic segmentation that is instance-aware

Figure 60 shows the instance-aware semantic segmentation of the partial segmentation example. In this case, the source PNG image contains different colors for each instance of the class car.

JSON example

{
    "objects": {
      "1": {
        "name": "instance0",
        "type": "car",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [425, 143, 424, 144, 423, 144, 422, 145, 422, 146, 419, 149, 419, 150, 418, 151, 418, 169, 421, 169, 421, 167, 422, 166, 440, 166, 441, 167, 441, 169, 445, 169, 445, 156, 446, 155, 447, 155, 447, 152, 445, 152, 444, 151, 444, 148, 442, 146, 442, 145, 440, 143],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, -1, -1, -1]
            }
          ]
        },
        ...
      },
      "2": {
        "name": "instance1",
        "type": "car",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [118, 112, 117, 113, 107, 113, 106, 114, 106, 115, 105, 116, 105, 117, 104, 118, 104, 120, 103, 121, 103, 124, 102, 125, 102, 129, 101, 130, 101, 134, 100, 135, 100, 137, 99, 138, 99, 167, 98, 168, 96, 168, 95, 169, 95, 174, 97, 176, 98, 176, 99, 177, 100, 177, 101, 178, 105, 178, 107, 180, 120, 180, 121, 181, 121, 184, 122, 185, 122, 186, 126, 190, 127, 190, 128, 191, 145, 191, 150, 186, 154, 186, 155, 187, 182, 187, 183, 188, 220, 188, 220, 184, 221, 183, 221, 181, 222, 180, 222, 179, 224, 177, 224, 176, 226, 174, 226, 173, 228, 171, 228, 170, 230, 168, 230, 167, 231, 166, 231, 165, 233, 163, 233, 162, 234, 161, 234, 160, 235, 159, 235, 158, 236, 157, 236, 156, 238, 154, 238, 153, 239, 152, 239, 151, 241, 149, 241, 148, 243, 146, 243, 145, 245, 143, 245, 142, 248, 139, 234, 125, 233, 125, 230, 122, 229, 122, 227, 120, 226, 120, 225, 119, 224, 119, 223, 118, 221, 118, 220, 117, 215, 117, 214, 116, 207, 116, 206, 115, 193, 115, 192, 114, 172, 114, 171, 113, 133, 113, 132, 112],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, -1, -1, -1]
            }
          ]
        },
        ...
      },
      "3": {
        "name": "instance2",
        "type": "car",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [267, 129, 266, 130, 261, 130, 260, 131, 257, 131, 255, 133, 254, 133, 249, 138, 249, 139, 246, 142, 246, 143, 244, 145, 244, 146, 242, 148, 242, 149, 240, 151, 240, 152, 239, 153, 239, 154, 237, 156, 237, 157, 236, 158, 236, 159, 235, 160, 235, 161, 234, 162, 234, 163, 232, 165, 232, 166, 231, 167, 231, 168, 229, 170, 229, 171, 227, 173, 227, 174, 225, 176, 225, 177, 223, 179, 223, 180, 222, 181, 222, 183, 221, 184, 221, 196, 220, 197, 220, 199, 219, 200, 219, 201, 218, 202, 218, 205, 217, 206, 217, 221, 218, 222, 218, 224, 219, 225, 219, 248, 221, 250, 221, 251, 223, 253, 235, 253, 235, 247, 236, 246, 236, 242, 238, 240, 245, 240, 246, 241, 254, 241, 255, 242, 260, 242, 261, 243, 266, 243, 267, 244, 279, 244, 280, 245, 297, 245, 298, 246, 344, 246, 345, 245, 349, 245, 350, 246, 350, 250, 349, 251, 349, 259, 363, 259, 363, 258, 364, 257, 364, 252, 365, 251, 365, 231, 366, 230, 366, 221, 367, 220, 367, 214, 368, 213, 368, 198, 367, 197, 367, 181, 362, 176, 362, 175, 361, 174, 361, 173, 362, 172, 370, 172, 371, 171, 372, 171, 373, 170, 374, 170, 374, 165, 372, 163, 362, 163, 362, 167, 361, 168, 360, 168, 359, 167, 359, 161, 358, 160, 358, 154, 357, 153, 357, 152, 356, 151, 356, 150, 354, 148, 354, 147, 353, 146, 353, 145, 352, 144, 352, 142, 350, 140, 350, 139, 348, 137, 348, 136, 344, 132, 343, 132, 342, 131, 339, 131, 338, 130, 329, 130, 328, 129],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, -1, -1, -1]
            }
          ]
        },
        ...
      }
    }
}

The example shows the JSON objects that represent each of the instances of the class car.

Figure 61. Example of a full scene segmentation that is non instance-aware

Figure 61 shows a complete scene segmentation example. All pixels of the image have been labeled with a certain class value.

For simplification, a reduced dictionary has been used:

  • Car

  • Vegetation

  • Sky

  • Poles

  • Street

  • Miscellaneous

Note that each class is given a certain RGB value at the PNG image.

Figure 62. Example of a full scene segmentation that is instance-aware

Figure 62 shows the same complete scene segmentation example but with instance-aware coloring of instantiable classes. That means that pixels corresponding to class car are colored according to the instance they correspond to.

Both non-instance aware and instance-aware segmentations can be encoded together into a single ASAM OpenLABEL JSON payload. Class-level polygons, that is, non-instance aware ones, can be encoded as ASAM OpenLABEL objects with names that include the word class and the type car. In addition, instance-aware shapes can be encoded as other ASAM OpenLABEL objects with names that include the word instance and the type car.

JSON example

{
    "objects": {
      "0": {
        "name": "class0",
        "type": "car",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [425, 143, 424, 144, 423, 144, 422, 145, 422, 146, 419, 149, 419, 150, 418, 151, 418, 169, 421, 169, 421, 167, 422, 166, 440, 166, 441, 167, 441, 169, 445, 169, 445, 156, 446, 155, 447, 155, 447, 152, 445, 152, 444, 151, 444, 148, 442, 146, 442, 145, 440, 143],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [1, -1, -1, -1]
            }, {
              "name": "contour1",
              "val": [118, 112, 117, 113, 107, 113, 106, 114, 106, 115, 105, 116, 105, 117, 104, 118, 104, 120, 103, 121, 103, 124, 102, 125, 102, 129, 101, 130, 101, 134, 100, 135, 100, 137, 99, 138, 99, 167, 98, 168, 96, 168, 95, 169, 95, 174, 97,...],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, 0, -1, -1]
            }
          ]
        },
        ...
      },
      "1": {
        "name": "instance0",
        "type": "car",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [425, 143, 424, 144, 423, 144, 422, 145, 422, 146, 419, 149, 419, 150, 418, 151, 418, 169, 421, 169, 421, 167, 422, 166, 440, 166, 441, 167, 441, 169, 445, 169, 445, 156, 446, 155, 447, 155, 447, 152, 445, 152, 444, 151, 444, 148, 442, 146, 442, 145, 440, 143],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, -1, -1, -1]
            }
          ]
        },
        "object_data_pointers": {
          "contour0": {
            "type": "poly2d",
            "frame_intervals": []
          }
        }
      },
      "2": {
        "name": "instance1",
        "type": "car",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [118, 112, 117, 113, 107, 113, 106, 114, 106, 115, 105, 116, 105, 117, 104, 118, 104, 120, 103, 121, 103, 124, 102, 125, 102, 129, 101, 130, 101, 134, 100, 135, 100, 137, 99, 138, 99, 167, 98, 168, 96, 168, 95, 169, 95, 174, 97, ...],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, -1, -1, -1]
            }
          ]
        },
        ...
      },
      "3": {
        "name": "instance2",
        "type": "car",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [267, 129, 266, 130, 261, 130, 260, 131, 257, 131, 255, 133, 254, 133, 249, 138, 249, 139, 246, 142, 246, 143, 244, 145, 244, 146, 242, 148, 242, 149, ...],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, -1, -1, -1]
            }
          ]
        },
        ...
      },
      "4": {
        "name": "class1",
        "type": "vegetation",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [381, 157, 381, 160, 384, 160, 385, 159, 383, 157],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [1, -1, -1, -1]
            }, {
              "name": "contour1",
              "val": [444, 145, 444, 147, 445, 148, 445, 151, 447, 151, 448, 152, 448, 149, 446, 149, 445, 148, 445, 145],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [2, 0, -1, -1]
            }, {
              "name": "contour2",
              "val": [376, 142, 375, 143, 373, 143, 373, 161, 375, 161, 377, 159, 379, 159, 379, 156, 378, 156, 376, 154, 376, 153, 374, 151, 374, 150, 375, 149, 378, 149, 379, 150, 379, 142],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [3, 1, -1, -1]
            }, {
              "name": "contour3",
              "val": [376, 138, 379, 138],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [4, 2, -1, -1]
            }, {
              "name": "contour4",
              "val": [361, 41, 361, 52, 362, 53, 362, 85, 363, 86, 363, 159, 366, 159, 367, 160, 371, 160, 371, 143, 369, 143, 368, 142, 368, 134, 369, 133, 375, 133, 376, 134, 379, 134, 376, 134, 375, 133, 375, 129, 376, 128, 384, 128, 385, 129, 385, 133, 384, 134, 381, 134, 384, 134, 385, 135, 385, 137, 384, 138, 381, 138, 384, ...],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [5, 3, -1, -1]
            }, {
              "name": "contour5",
              "val": [3, 29, 1, 31, 0, 31, 0, 151, 3, 151, 4, 152, 7, 152, 8, 151, 21, 151, 21, 119, 22, 118, 22, 74, 23, 73, 23, 47, 24, 46, 24, 34, 23, 33, 21, 33, 21, 34, 23, 36, 23, 38, 22, 39, 21, 39, 20, 40, 19, 40, 19, 41, 16, 44, 14, 44, 10, 48, 9, 48, 8, 47, 8, 44, 7, 43, 7, 42, 8, 41, 8, 34, 7, 33, 7, 32, 6, 31, 6, 30, 5, 29],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [6, 4, -1, -1]
            }, {
              "name": "contour6",
              "val": [237, 17, 235, 19, 231, 19, 224, 26, 222, 26, 221, 27, 221, 29, 220, 30, 219, 30, 218, 31, 216, 31, 214, 33, 212, 33, 211, 34, 210, 34, 210, 36, 209, 37, 209, 39, 207, 41, 206, 41, 205, 42, 204, 42, 202, 40, 202, 36, 200, 34, 200, 33, ...],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [7, 5, -1, -1]
            }, {
              "name": "contour7",
              "val": [65, 1, 64, 2, 63, 2, 62, 3, 62, 7, 60, 9, 58, 9, 57, 8, 57, 6, 56, 5, 56, 4, 55, 3, 53, 3, 51, 5, 50, 5, 50, 6, 49, 7, 49, 8, 48, 9, 48, 10, 47, 11, 47, 17, 46, 18, 44, 18, 43, 19, 34, 19, 32, 17, 31, 17, 30, 16, 26, 16, 25, 17, 25, ...],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, 6, -1, -1]
            }
          ]
        },
        ...
      },
      "5": {
        "name": "class2",
        "type": "sky",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [28, 29, 27, 30, 27, 33, 28, 32, 29, 32, 30, 31, 30, 30, 29, 30],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [1, -1, -1, -1]
            }, {
              "name": "contour1",
              "val": [31, 22, 29, 24, 30, 24, 31, 23],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [2, 0, -1, -1]
            }, {
              "name": "contour2",
              "val": [125, 0, 125, 22, 125, 21, 127, 19, 133, 19, 135, 21, 135, 23, 136, 23, 139, 26, 140, 26, 141, 27, 141, 28, 142, 28, 143, 29, 143, 30, 144, 30, 144, 23, ...],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [3, 1, -1, -1]
            }, {
              "name": "contour3",
              "val": [0, 0, 0, 30, 1, 30, 3, 28, 5, 28, 7, 30, 7, 31, 8, 32, 8, 33, 9, 34, 9, 41, 8, 42, 8, 43, 9, 44, 9, 47, 10, 47, 14, 43, 16, 43, 18, 41, 18, 40, 19, 39, ...],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, 2, -1, -1]
            }
          ]
        },
        ...
      },
      "6": {
        "name": "class3",
        "type": "poles",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [385, 151, 385, 157],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [1, -1, -1, -1]
            }, {
              "name": "contour1",
              "val": [402, 145, 402, 151],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [2, 0, -1, -1]
            }, {
              "name": "contour2",
              "val": [394, 139, 394, 143, 395, 143, 396, 144, 396, 160, 396, 144, 397, 143, 398, 143, 398, 139],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [3, 1, -1, -1]
            }, {
              "name": "contour3",
              "val": [349, 137, 351, 139, 351, 140, 353, 142, 353, 143, 353, 137],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [4, 2, -1, -1]
            }, {
              "name": "contour4",
              "val": [446, 130, 446, 137, 445, 138, 444, 138, 443, 137, 441, 137, 440, 138, 440, 142, 443, 145, 443, 146, 443, 145, 444, 144, 445, 144, 446, 145, 446, 148, 448, 148, 449, 149, 449, 156, 449, 149, 450, 148, 452, 148, 452, 130],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [5, 3, -1, -1]
            }, {
              "name": "contour5",
              "val": [376, 129, 376, 133, 375, 134, 369, 134, 369, 142, 371, 142, 372, 143, 372, 162, 372, 143, 373, 142, 375, 142, 376, 141, 379, 141, 380, 142, 380, 161, 380, 142, 381, 141, 384, 141, 384, 139, 381, 139, 380, 138, 381, 137, 384, 137, 384, 135, 381, 135, 380, 134, 381, 133, 384, 133, 384, 129],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [8, 4, 6, -1]
            }, {
              "name": "contour6",
              "val": [375, 138, 376, 137, 379, 137, 380, 138, 379, 139, 376, 139],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [7, -1, -1, 5]
            }, {
              "name": "contour7",
              "val": [375, 134, 376, 133, 379, 133, 380, 134, 379, 135, 376, 135],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, 6, -1, 5]
            }, {
              "name": "contour8",
              "val": [25, 20, 24, 21, 23, 21, 24, 22, 24, 23, 25, 24, 25, 46, 24, 47, 24, 73, 23, 74, 23, 118, 22, 119, 22, 151, 24, 151, 24, 110, 25, 109, 25, 72, 26, 71, 26, 30, 27, 29, 27, 26, 28, 25, 28, 24, 30, 22, 30, 21, 29, 21, 28, 20],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [9, 5, -1, -1]
            }, {
              "name": "contour9",
              "val": [363, 14, 362, 15, 360, 15, 359, 16, 359, 17, 358, 18, 354, 18, 353, 17, 352, 17, 351, 18, 347, 18, 347, 20, 352, 20, 353, 21, 358, 21, 359, 22, 359, 65, 360, 66, 360, 163, 361, 164, 361, 163, 362, 162, 362, 86, 361, 85, 361, 53, 360, 52, 360, 21, 361, 20, 362, 20, 365, 17, 367, 17, 368, 16, 371, 16, 371, 15, 370, 14, 369, 15, 366, 15, 365, 14],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [10, 8, -1, -1]
            }, {
              "name": "contour10",
              "val": [114, 0, 114, 22, 113, 23, 113, 66, 112, 67, 112, 102, 111, 103, 111, 112, 117, 112, 118, 111, 122, 111, 122, 61, 123, 60, 123, 29, 124, 28, 124, 0],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, 9, -1, -1]
            }
          ]
        },
        ...
      },
      "7": {
        "name": "class4",
        "type": "miscellaneous",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [458, 166, 457, 167, 456, 167, 456, 168, 457, 169, 470, 169, 470, 166],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [1, -1, -1, -1]
            }, {
              "name": "contour1",
              "val": [370, 161, 371, 161],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [2, 0, -1, -1]
            }, {
              "name": "contour2",
              "val": [377, 160, 379, 160],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [3, 1, -1, -1]
            }, {
              "name": "contour3",
              "val": [363, 160, 363, 161, 366, 161, 366, 160],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [4, 2, -1, -1]
            }, {
              "name": "contour4",
              "val": [359, 160],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [5, 3, -1, -1]
            }, {
              "name": "contour5",
              "val": [381, 155, 381, 156, 383, 156, 386, 159, 384, 161, 383, 161, 385, 161, 386, 162, 388, 162, 389, 163, 390, 163, 391, 164, 396, 164, 397, 165, 403, 165, 404, 164, 404, 163, 405, 162, 403, 160, 401, 160, 400, 159, 398, 159, 397, 158, 397, 160, 396, 161, 395, 160, 395, 158, 394, 157, 391, 157, 390, 156, 387, 156, 386, 155, 386, 157, 385, 158, 384, 157, 384, 155],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [6, 4, -1, -1]
            }, {
              "name": "contour6",
              "val": [450, 154, 450, 156, 451, 156, 452, 157, 453, 157, 454, 158, 456, 158, 457, 159, 458, 159, 459, 160, 460, 160, 461, 161, 463, 161, 464, 162, 470, 162, 470, 159, 469, 159, 467, 157, 464, 157, 463, 158, 463, 160, 462, 161, 461, 161, 459, 159, 458, 159, 455, 156, 453, 156, 452, 155, 451, 155],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [7, 5, -1, -1]
            }, {
              "name": "contour7",
              "val": [448, 153, 448, 156],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [8, 6, -1, -1]
            }, {
              "name": "contour8",
              "val": [382, 150],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [9, 7, -1, -1]
            }, {
              "name": "contour9",
              "val": [375, 150, 375, 151, 377, 153, 377, 154, 378, 155, 379, 155, 379, 154, 378, 153, 378, 152, 379, 151, 378, 150],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [10, 8, -1, -1]
            }, {
              "name": "contour10",
              "val": [386, 149, 388, 149],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [11, 9, -1, -1]
            }, {
              "name": "contour11",
              "val": [403, 147, 403, 150, 406, 150, 407, 151, 410, 151, 411, 152, 414, 152, 415, 153, 417, 153, 417, 151, 418, 150, 418, 149, 416, 149, 415, 148, 410, 148, 409, 147],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [12, 10, -1, -1]
            }, {
              "name": "contour12",
              "val": [399, 143, 398, 144, 397, 144, 397, 149, 401, 149, 401, 146, 400, 145, 400, 143],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [13, 11, -1, -1]
            }, {
              "name": "contour13",
              "val": [392, 143, 391, 144, 391, 146, 392, 147, 391, 148, 395, 148, 395, 144, 394, 144, 393, 143],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [14, 12, -1, -1]
            }, {
              "name": "contour14",
              "val": [73, 128, 73, 130, 69, 134, 67, 134, 65, 136, 61, 136, 60, 135, 51, 135, 50, 136, 49, 136, 48, 137, 47, 137, 48, 138, 48, 142, 47, 143, 47, 147, 56, 147, 57, 148, 62, 148, 63, 149, 61, 151, 25, 151, 24, 152, 8, 152, 7, 153, 4, 153, 3, 152, 0, 152, 0, 166, 18, 166, 19, 167, 53, 167, 54, 168, 94, 168, 95, 167, 98, 167, 98, 148, 97, 147, 97, 140, 89, 140, 88, 139, 88, 137, 83, 137, 82, 136, 82, 134, 83, 133, 84, 133, 85, 132, 86, 132, 86, 131, 83, 131, 80, 134, 79, 134, 76, 131, 76, 130, 74, 128],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, 13, -1, -1]
            }
          ]
        },
        ...
      },
      "8": {
        "name": "class5",
        "type": "street",
        "object_data": {
          "poly2d": [{
              "name": "contour0",
              "val": [360, 164, 360, 167, 361, 167, 361, 165],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [1, -1, -1, -1]
            }, {
              "name": "contour1",
              "val": [367, 161, 366, 162, 363, 162, 371, 162, 370, 162, 369, 161],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [2, 0, -1, -1]
            }, {
              "name": "contour2",
              "val": [379, 152, 379, 153],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [3, 1, -1, -1]
            }, {
              "name": "contour3",
              "val": [397, 150, 397, 157, 398, 158, 400, 158, 401, 159, 403, 159, 406, 162, 405, 163, 405, 164, 403, 166, 397, 166, 396, 165, 391, 165, 390, 164, 389, 164, 388, 163, 386, 163, 385, 162, 383, 162, 382, 161, 381, 161, 380, 162, 379, 161, ...],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [4, 2, -1, -1]
            }, {
              "name": "contour4",
              "val": [389, 149, 388, 150, 383, 150, 382, 151, 381, 151, 381, 154, 384, 154, 384, 151, 385, 150, 386, 151, 386, 154, 387, 155, 390, 155, 391, 156, 394, 156, 395, 157, 395, 149],
              "mode": "MODE_POLY2D_ABSOLUTE",
              "closed": true,
              "hierarchy": [-1, 3, -1, -1]
            }
          ]
        },
        ...
      }
    }
}

The example shows the non-instance-aware and instance-aware objects together in the same JSON payload. The coordinate arrays of the long polygons have been reformatted for better readability.

There are different encoding options in ASAM OpenLABEL. Absolute coordinates may be used to maintain some level of human readability. Applying chain-encoding mechanisms, however, significantly compacts the representation of the coordinates. A third option is to encode the entire source PNG image as a base64 payload and embed it in an ASAM OpenLABEL object.
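
As a rough sketch of the third option, the following Python snippet base64-encodes a source PNG so that it can be embedded as a string value in an ASAM OpenLABEL payload. The file name and the surrounding keys are illustrative assumptions, not normative schema elements.

Python example

import base64
import json

# Illustrative only: the file name and the JSON keys are assumptions for this
# sketch, not normative ASAM OpenLABEL schema elements.
with open("segmentation_mask.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

embedded = {
    "name": "segmentation_mask",
    "val": encoded,            # base64 string of the raw PNG bytes
    "mime_type": "image/png"   # assumed metadata key for illustration
}
print(json.dumps(embedded)[:80] + "...")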

8. Scenario tagging

8.1. Introduction

Tagging scenarios means enriching raw data with additional metadata. In the context of ASAM OpenLABEL these metadata are called tags. Tags provide high-level information related to the content of the scenario. Tags help to describe the scenario and act as keywords for searching and filtering scenarios within scenario databases. Tags refer to the whole container of information and do not include spatiotemporal, geometric, or other constructs utilized to isolate and localize the tagged concepts within the raw data.

Additionally, tags may also be relevant for training, validating, and testing specific machine-learning classification algorithms.

This chapter covers scenario tagging in detail, including the following topics:

  • The ontology providing the set of standardized ASAM OpenLABEL scenario tags.

  • The semantics and the logic governing the semantics of the ASAM OpenLABEL scenario tags.

  • The annotation schema to which valid ASAM OpenLABEL scenario tagging annotation instances shall conform.

  • The mechanisms that govern the reference to external knowledge repositories, such as ontologies, that organize and define the semantics of the labels.


8.1.1. Raw data sources for scenario tagging

Examples for raw data sources:

  • Test scenario files for simulation, for example OSC, M-SDL, Safety Pool SDL, Geoscenario, and other files describing simulation scenarios.

  • Sensor data streams, similarly to the multi-sensor data labeling use case. Examples are images, videos, and point clouds.

  • Valid ASAM OpenLABEL multi-sensor data labeling annotation instances can also be used as raw data to which additional scenario tagging metadata apply.

8.2. Tagging semantics

ASAM OpenLABEL assumes the use of an external knowledge repository, for example, an ontology, where the tags are organized, their semantics is defined, and values for tags are also defined, where relevant.

This section provides the following:

  • A description of the ASAM OpenLABEL scenario tagging ontology organizing the set of standardized tags for ASAM OpenLABEL.

  • A description of the mechanisms used to define the subset(s) of the ontology that are considered in a specific tagging instance, together with the logic that governs the interpretation of missing tags.

  • A description of the mechanisms used to assign valid tag values from the ontology and how to deal with the semantics of multiple values per single tag.

8.2.1. ASAM OpenLABEL tags

The ASAM OpenLABEL tags are the reference set of tags used to provide a summary of the content of a scenario, which may be represented as a scenario definition in some Scenario Definition Language (SDL) or as sensor data.

Scenario tagging provides a summary of the scenario and is not intended to be used for identifying individual objects or actors within a scenario. Tagging at this level of detail is provided by ASAM OpenLABEL Multi-sensor data labeling.

The ASAM OpenLABEL tags are organized into three categories which can be used to describe different aspects of a scenario.

  • Operational Design Domain (ODD) tags: ODD tags describe the environmental conditions and road features present in a scenario, such as rainfall and junction. The ASAM OpenLABEL ODD tags are aligned with and share their definitions with the BSI PAS 1883 ODD Taxonomy [10].

  • Behavior tags: Behavior tags describe the types of road users and the behaviors exhibited by them in a scenario, such as a pedestrian who is walking.

  • Administration tags: Administration tags describe qualities of a scenario which cannot, or cannot easily, be derived from the scenario content itself, such as the creation date of a scenario.


Tag structure

Within the ODD and Behavior categories, and where applicable in the Administration category, tags are organized into a hierarchical structure with their position in the hierarchy reflecting the generality of a tag. Generality increases up the hierarchy, while specificity increases down the hierarchy, for example:

scenery
|-junction
|--roundabout
|---large roundabout

The example shows that large roundabout is at the lowest position in the hierarchy as it is the most specific form of roundabout. When moving up the hierarchy, the tags become less specific and more general.

This hierarchical relationship between tags is a fundamental concept, as it makes it possible to draw inferences about scenario content. For example, if a scenario is tagged with large roundabout, then the hierarchical relationships can be applied and it is possible to infer the more general statement that the scenario contains a roundabout. Going further, it is possible to infer the even more general statement that it contains a junction.

Applying inferencing in this way means that when tagging a scenario, only the most specific applicable tags need to be applied. It becomes unnecessary to apply the more general tags, which can be inferred.

This allows for more concise scenario tagging and efficient storage because unnecessary tags do not have to be stored. This bottom-up approach of selecting only the most specific tags which apply means only a minimal set of tags are needed to tag a scenario, and it is this approach that shall be used for ASAM OpenLABEL scenario tagging.

The minimal set does not include any tag that may be inferred from any other tag in the minimal set. The minimal set may be used to define the complete tag set for a scenario which includes all tags that belong to the minimal set and all those which may be inferred from the minimal set.
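
As an illustration of the relationship between the minimal set and the complete set, the following Python sketch computes the complete tag set by adding all ancestors of each tag in the minimal set. The parent map is a hypothetical excerpt of the hierarchy, not the normative ontology.

Python example

# Hypothetical parent relation for a small excerpt of the tag hierarchy.
PARENT = {
    "large roundabout": "roundabout",
    "roundabout": "junction",
    "junction": "scenery",
}

def complete_set(minimal_set):
    """Return the minimal set plus every tag that can be inferred from it."""
    complete = set()
    for tag in minimal_set:
        while tag is not None:
            complete.add(tag)
            tag = PARENT.get(tag)
    return complete

print(complete_set({"large roundabout"}))
# {'large roundabout', 'roundabout', 'junction', 'scenery'}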

ASAM OpenLABEL scenario tagging ontology

The ASAM OpenLABEL tags and the relations between them form the ASAM OpenLABEL scenario tagging ontology. The ASAM OpenLABEL scenario tagging ontology is available in a machine-readable form using the RDF turtle format, which nevertheless remains human-readable.

The RDF turtle format is a W3C Recommendation and is a textual syntax that allows an RDF graph to be completely written in a compact and natural text form [20]. It provides levels of compatibility with the N-Triples [N-TRIPLES] format as well as the triple pattern syntax of the SPARQL W3C Recommendation [21].

The RDF turtle definition of the ASAM OpenLABEL scenario tagging ontology provides compatibility with a variety of RDF tools and toolkits that, in turn, offer inference and querying functionalities.

The tag hierarchy is replicated in the ASAM OpenLABEL scenario tagging ontology through the use of subclassing. The following is an excerpt from the ASAM OpenLABEL scenario tagging ontology which shows the definitions for the Intersection and Roundabout tags and how they are related to the more general Junction tag through a sub-class relationship, with the Odd tag being the root of the hierarchy for the ODD tags, and all tags being a sub-class of Tag.

RDF turtle example

@base <https://openlabel.asam.net/V1-0-0/ontologies#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<Tag> a rdfs:Class ;
	rdfs:subClassOf rdfs:Class ;
	rdfs:label "Base Tag"@en ;
	rdfs:comment "The base tag"@en .

<Odd> a rdfs:Class ;
	rdfs:subClassOf <Tag> ;
	rdfs:label "ODD"@en ;
	rdfs:comment "Refer to BSI PAS-1883 Section 5"@en ;
	rdfs:seeAlso "https://www.bsigroup.com/en-GB/CAV/pas-1883" .

<OddScenery> a rdfs:Class ;
    rdfs:subClassOf <Odd> ;
    rdfs:label "Junction"@en ;
    rdfs:comment "Refer to BSI PAS-1883 Section 5.1.a"@en ;
    rdfs:seeAlso "https://www.bsigroup.com/en-GB/CAV/pas-1883" .

<SceneryJunction> a rdfs:Class ;
    rdfs:subClassOf <OddScenery> ;
    rdfs:label "Junction"@en ;
    rdfs:comment "Refer to BSI PAS-1883 Section 5.2.1.c"@en ;
    rdfs:seeAlso "https://www.bsigroup.com/en-GB/CAV/pas-1883" .

<JunctionIntersection> a rdfs:Class ;
    rdfs:subClassOf <SceneryJunction> ;
    rdfs:label "Intersection"@en ;
    rdfs:comment "Refer to BSI PAS-1883 Section 5.2.4"@en ;
    rdfs:seeAlso "https://www.bsigroup.com/en-GB/CAV/pas-1883" .

<JunctionRoundabout> a rdfs:Class ;
    rdfs:subClassOf <SceneryJunction> ;
    rdfs:label "Roundabout"@en ;
    rdfs:comment "Refer to BSI PAS-1883 Section 5.2.4"@en ;
    rdfs:seeAlso "https://www.bsigroup.com/en-GB/CAV/pas-1883" .

Administration tags represent values which characterize a scenario, rather than things which a scenario contains. As such, they are defined as RDF properties which relate values to scenarios. The following excerpt from the ASAM OpenLABEL scenario tagging ontology is for the Scenario name administration tag, which defines a textual property that allows a scenario to be assigned a name.

RDF turtle example

<scenarioName> a rdfs:Property ;
    rdfs:label "Scenario name"@en ;
    rdfs:comment "The name of the scenario"@en ;
    rdfs:domain <Scenario> ;
    rdfs:range rdfs:Literal .

Tag naming convention

Tag names in the ontology shall be unique. To avoid ambiguity, the names of the tag classes follow a naming convention: each name is constructed from a prefix taken from the parent class name and a suffix taken from the child class name. Pascal case is used for class names, whilst camel case is used for properties.

It shall be assumed that tags in a tagging instance are processed in a case-sensitive manner and therefore shall correspond exactly with ASAM OpenLABEL tag names.

Tagging instance ontology usage

When creating an ASAM OpenLABEL tagging instance, the instance shall reference the ASAM OpenLABEL scenario tagging ontology to give meaning to the tags used in the instance. This is achieved by referencing the ASAM OpenLABEL scenario tagging ontology from the ontologies section of the instance using its URI https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl, and by specifying for each tag the ontology to which it belongs.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "ontologies": {
            "0": {
                "uri": "https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl"
            }
        },
        "tags": {
            "0": {
                "type": "SpecialStructurePedestrianCrossing",
                "ontology_uid": "0"
            }
        }
    }
}

The example shows how the ASAM OpenLABEL scenario tagging ontology is referenced and that it has been assigned 0 as the ontology identifier. Each ontology in the ontologies section shall be given a unique identifier. This identifier is then referenced by the SpecialStructurePedestrianCrossing tag through its ontology_uid element, indicating that the tag is a member of that ontology.


8.2.2. Tagging subsets

When tagging scenarios, it may be that not all of the available tags in an ontology are to be considered, due to cost, time, or technical constraints. In addition, some tags may be out of scope for the intended use of the tagged scenarios.

For example, if a tagging technician wants to tag a collected dataset for lane detection purposes, they only annotate lane-related features, such as the lane dimensions and lane marking types. Other types of features are ignored.

Figure 63. Scenario tagging ontology

This means that a lot of information is absent from the tagged data. However, this information can be interesting and important for other users, because different users may use the same dataset for different purposes. For example, an environment perception researcher might be more interested in junctions than in lane numbers.

This ambiguity may lead to unexpected and inconsistent results when querying scenarios where the presence of a road feature is not desired and could result in the incorrect selection of scenarios which include that road feature but were not tagged with it.

The ambiguity means that it is not possible to determine from the tagging whether:

  • The relevant feature does not exist in the collected data.

Or

  • The relevant feature does exist in the collected data but it has not been tagged.

This uncertainty about the cause of an absent tag can lead to unexpected and inconsistent system responses. One typical use case is querying datasets for specific road features, for example, retrieving tagged data based on conditions on a specific tag value. In the above example, the tagging technician has only tagged lane-relevant attributes and there are no tags for the queried T-junction, even if a T-junction actually exists in the data. If the environment perception researcher wants to query all scenarios without a T-junction, the system can either return nothing, because there is no information about junctions, or return the tagged data, because no T-junction is tagged.

That means the problem is how the absence of a tag should be interpreted. Does it mean that the scenario does not contain that thing, or does it mean that it is not known whether the scenario contains that thing?

To resolve this uncertainty and ensure predictable behavior, ASAM OpenLABEL allows for the subset of tags that has been used in the tagging process to be specified. By knowing this subset, it can be used to assert that, if a tag is not present in the scenario tags but is present in the ontology subset, then it means that the scenario does not contain that thing. For tags outside the subset, it is unknown as to whether the scenario contains that thing or not. It is not valid to use tags outside of the bounds of the ontology subset.

Ontology subsets are defined by specifying the minimal set of tags which bound the subset, and this is termed the tagging boundary. As with scenario tagging, the tagging boundary shall not include any tags which can be inferred from other members of the tagging boundary.

Subsets can be defined either by inclusion or exclusion. The subset is formed from tags on the inside or the outside of the boundary. When deciding which method to use, it is suggested to use whichever method results in the smallest set of boundary tags.

When using the inclusion method, the subset is formed by starting from the empty set and adding the boundary tags and the ascendants of the boundary tags.

When using the exclusion method, the subset is defined as the complete set of ontology tags minus the boundary tags and the descendants of the boundary tags.

If no boundary is specified, the entire set of tags from the ontology forms the subset.

Administration tags shall not be included in the tagging boundary and their absence from a tagging instance means that the information about the scenario for that tag is unknown.

In the tagging schema, the tagging boundary is specified for an ontology using the boundary_list element. The boundary_mode element determines whether the inclusion or exclusion method is used, by setting it to include or exclude respectively.

JSON example

{
    "openlabel": {
        "metadata": {
           "schema_version": "1.0.0"
        },
        "ontologies": {
            "0": {
                "uri": "https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl",
                "boundary_list": ["JunctionIntersection", "JunctionRoundabout"],
                "boundary_mode": "include"
            }
        },
        "tags": {
            "0": {
                "type": "JunctionIntersection",
                "ontology_uid": "0"
            }
        }
    }
}

The example shows a subset of the ASAM OpenLABEL scenario tagging ontology that only includes the tags for intersections and roundabouts. Considering this, it can be asserted that the tagged scenario contains an intersection and does not contain a roundabout. It can further be inferred that the scenario contains a junction, but it is unknown whether it contains a pedestrian crossing.

In implementation, scenario querying systems shall not allow the querying of scenarios with tags which fall outside of the boundary, because tags outside the boundary have no meaning and the results would therefore be undefined.

Administration tags shall not need to be included in the tagging subset.
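
The following Python sketch is a non-normative illustration of the inclusion method and of the resulting three-valued interpretation of tag absence. The ancestor map is a stub covering only the excerpt used in the example above.

Python example

# Stub ancestor map; in practice this would be derived from the RDF ontology.
ANCESTORS = {
    "JunctionIntersection": {"SceneryJunction", "OddScenery", "Odd"},
    "JunctionRoundabout": {"SceneryJunction", "OddScenery", "Odd"},
}

def include_subset(boundary):
    """Inclusion method: the boundary tags plus all of their ascendants."""
    subset = set(boundary)
    for tag in boundary:
        subset |= ANCESTORS.get(tag, set())
    return subset

def presence(tag, complete_scenario_tags, subset):
    """Three-valued interpretation of a tag under a tagging subset."""
    if tag in complete_scenario_tags:
        return "contained"
    if tag in subset:
        return "not contained"  # absent but inside the subset
    return "unknown"            # outside the subset, nothing can be asserted

subset = include_subset(["JunctionIntersection", "JunctionRoundabout"])
scenario = {"JunctionIntersection", "SceneryJunction", "OddScenery", "Odd"}
print(presence("JunctionRoundabout", scenario, subset))                  # not contained
print(presence("SpecialStructurePedestrianCrossing", scenario, subset))  # unknown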


8.2.3. Tagging extensions

There may be situations in which the ASAM OpenLABEL tags do not meet the precise needs of a tagging objective and additional tags are needed. ASAM OpenLABEL makes it possible to extend the set of tags used for tagging. Additional tags may be added independently of the ASAM OpenLABEL scenario tagging ontology or may be used to extend it.

For example, in the UK, there are different types of pedestrian crossings, such as a Toucan Crossing which is a crossing for pedestrians and cycles. ASAM OpenLABEL allows the ASAM OpenLABEL Pedestrian Crossing tag to be extended with this more specific type of crossing.

RDF turtle example

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ol: <https://openlabel.asam.net/V1-0-0/ontologies#> .
@prefix ex: <https://example.org/ontologies/v1#> .

ex:ToucanCrossing a rdfs:Class ;
    rdfs:subClassOf ol:SpecialStructurePedestrianCrossing ;
    rdfs:label "Toucan Crossing" ;
    rdfs:comment "A type of crossing designed for both pedestrians and cyclists" ;
    rdfs:seeAlso "https://docs.example.org/ontologies/v1#ToucanCrossing" .

The example shows how to create a new ontology which references the ASAM OpenLABEL scenario tagging ontology and defines a new class, ToucanCrossing, which is a subclass of the ASAM OpenLABEL Pedestrian crossing tag (SpecialStructurePedestrianCrossing).

Ontologies shall be defined using the RDF turtle format and shall be assigned a URI so that they can be uniquely identified. The URI should resolve to a resource from where the RDF turtle definition can be downloaded.

The class name of new tags should follow the tag naming convention described elsewhere in this chapter.

When creating a new tag, the following properties shall be defined:

  • rdfs:label: Should be a short, human friendly name for the tag.

  • rdfs:comment: Should be a short description conveying the meaning of the tag.

  • rdfs:seeAlso: Should be a URL to a resource that contains a definition of the tag.

The following example shows how a new ontology shall be referenced from the ontologies section of the ASAM OpenLABEL instance. The new ToucanCrossing tag can then be used.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "ontologies": {
            "0": {
                "uri": "https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl"
            },
            "1": {
                "uri": "https://example.org/ontologies/v1"
            }
        },
        "tags": {
            "0": {
                "type": "ToucanCrossing",
                "ontology_uid": "1"
            }
        }
    }
}

A new tag which does not extend the ASAM OpenLABEL scenario tagging ontology shall be defined such that the new tag is a subclass of the base rdfs class and is therefore not related to the ASAM OpenLABEL scenario tagging ontology.

RDF turtle example

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ol: <https://openlabel.asam.net/V1-0-0/ontologies#> .
@prefix ex: <https://example.org/ontologies/v1#> .

ex:ScenarioStatus a rdfs:Class ;
    rdfs:subClassOf rdfs:Class ;
    rdfs:label "Scenario Status" ;
    rdfs:comment "Internal status code" ;
    rdfs:seeAlso "https://docs.example.org/ontologies/v1#ScenarioStatus" .

Enumerations should be avoided as they are not extensible, and values should be defined as subclasses instead.

Rules

  • Ontologies shall have a unique URI so that they can be uniquely identified.


8.2.4. Tagging values

For some classes of scenario content, it is desirable to be able to include quantitative values to specify the scope for the class. For example, when tagging a scenario for rainfall, the amount of rain might be specified. There are several tags like this within the ASAM OpenLABEL scenario tagging ontology that can have values specified, and the ontology contains property definitions to support this.

The following example shows the definition for the Rainfall tag and its associated Rainfall Intensity property. Note that the domain of the property is the tag. Tag properties are named by convention as being the associated tag name converted to camel case with the suffix 'Value' appended.

RDF turtle example

<EnvironmentWeather> a rdfs:Class ;
    rdfs:subClassOf <OddEnvironment> ;
    rdfs:label "Weather"@en ;
    rdfs:comment "Refer to BSI PAS-1883 Section 5.3.1"@en ;
    rdfs:seeAlso "https://www.bsigroup.com/en-GB/CAV/pas-1883" .

<WeatherRain> a rdfs:Class ;
    rdfs:subClassOf <EnvironmentWeather> ;
    rdfs:label "Rainfall"@en ;
    rdfs:comment "Refer to BSI PAS-1883 Section 5.3.1.2"@en ;
    rdfs:seeAlso "https://www.bsigroup.com/en-GB/CAV/pas-1883" .

<weatherRainValue> a rdfs:Property ;
	rdfs:label "Rainfall Intensity (mm/h)"@en ;
	rdfs:comment "Refer to BSI PAS-1883 Section 5.3.1.2"@en ;
	rdfs:domain <WeatherRain> ;
	rdfs:range xsd:decimal ;
	rdfs:seeAlso "https://www.bsigroup.com/en-GB/CAV/pas-1883" .
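
A minimal, non-normative sketch of the stated naming convention for tag-value properties:

Python example

def value_property_name(tag_class_name: str) -> str:
    """Derive the value property name: camel case plus the suffix 'Value'."""
    return tag_class_name[0].lower() + tag_class_name[1:] + "Value"

print(value_property_name("WeatherRain"))  # weatherRainValue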

Within an ASAM OpenLABEL instance, values associated with a tag are specified by adding the tag_data element to the tag element.

The following example specifies rainfall with a value of 3.1. Note that the metric and units for the value are specified in the ontology and not repeated in the instance.

JSON example

{
    "tags": {
        "0": {
            "type": "WeatherRain",
            "ontology_uid": "0",
            "tag_data": {
                "num": [{
                    "type": "value",
                    "val": "3.1"
                }]
            }
        }
    }
}

Refer to Data types (generic) for more detail on the tag_data element and the different types of data that are supported.

Where a tag can have a value, specifying the value should not be mandatory, as the value may not be known or may not be determinable. For example, it may be possible to detect that a scenario contains rainfall but not the amount of rain, in which case it is still desirable to tag the scenario for rainfall without specifying the amount.

Similarly, when querying scenarios, it might be desirable to include or exclude scenarios containing rain, in which case they would query using the rainfall tag without specifying an amount.


8.2.5. Tagging multiple values

In many cases, due to the variability of the natural world, it is not appropriate to use exact values for tags, and it is necessary to specify a range or multiple values.

Repeating a tag in an ASAM OpenLABEL instance is not allowed, nor is it necessary with the ability to specify multiple values.

Ranges are particularly suitable for describing quantities measured using non-integer values, such as rainfall and lane widths. In this case, variability in the measured value over time or space is likely, as is an imprecise measurement.

A range can be specified by indicating the upper and lower bounds of the set of possible values as in the following example:

JSON example

{
    "tag_data": {
        "vec": [{
            "type": "range",
            "val": [3.4, 3.7]
            }
        ]
    }
}

It is also possible to specify a range with only the upper or lower bound, as in the two following examples, in which case the limit on the possible range of values is determined by the definition of the tag.

JSON example

{
    "tag_data": {
        "num": [{
            "type": "min",
            "val": 1.2
        }]
    }
}

The example shows a range specified with only a lower bound.

JSON example

{
    "tag_data": {
        "num": [{
            "type": "max",
            "val": 20.1
        }]
    }
}

The example shows a range specified with an upper bound.

For situations where there is a discontinuous range, it is possible to specify this using multiple ranges as follows.

JSON example

{
    "tag_data": {
        "vec": [{
            "type": "range",
            "val": [3.4, 3.7]
            }, {
            "type": "range",
            "val": [3.9, 4.1]
        }]
    }
}

For tags where a discrete value is appropriate, such as Number of lanes, multiple values can be supplied together as a set for the same tag, as shown in the following example:

JSON example

{
    "tag_data": {
        "vec": [{
            "type": "values",
            "val": [2, 3]
        }]
    }
}

The example shows a set of values.
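
To make the interpretation of these value constructs concrete, the following Python sketch checks whether a numeric query value is covered by a tag's data. It is a non-normative illustration assuming the tag_data shapes shown in the examples above.

Python example

def matches(tag_data, query):
    """Check whether a numeric query value is covered by tag_data entries."""
    for entry in tag_data.get("vec", []):
        if entry["type"] == "range":
            lo, hi = entry["val"]
            if lo <= query <= hi:
                return True
        elif entry["type"] == "values" and query in entry["val"]:
            return True
    for entry in tag_data.get("num", []):
        kind, val = entry["type"], entry["val"]
        if (kind == "value" and query == val) or \
           (kind == "min" and query >= val) or \
           (kind == "max" and query <= val):
            return True
    return False

ranges = {"vec": [{"type": "range", "val": [3.4, 3.7]},
                  {"type": "range", "val": [3.9, 4.1]}]}
print(matches(ranges, 3.8))  # False: 3.8 falls in the gap between the ranges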

Rules

  • Repeating a tag in an ASAM OpenLABEL instance shall not be allowed.

  • Specified ranges should not overlap.


8.3. Annotation schema

The annotation schema defines the structure of annotations, data types, and conventions needed to unambiguously interpret the annotations. The annotation data format specifies how the annotation data is encoded for storage in computer files.

The annotation schema is described and formatted as a JSON schema. It defines the shape which valid JSON annotation instances shall conform to. The structure of the ASAM OpenLABEL annotation schema is serialized in the ASAM OpenLABEL JSON schema file. The annotation schema itself conforms to the JSON schema Draft 7 specification [13].

The annotation schema of ASAM OpenLABEL addresses the following general features related to scenario tagging:

  • Tagging different ODD, behavioral, and administrative characteristics of the raw data instance.

  • Defining a tagging subset that determines the subset of tags relevant for the specific tagging instance.

  • Discrete values and value range definitions for specific tags.

  • Linkage to ontologies and external resources.

  • Customizable and optional fields.

The annotation schema defines three main characteristic aspects of annotation data:

  • Structure: How data is organized, using hierarchies and key-value dictionaries.

  • Types: Primitive data types for key-value items.

  • Conventions: Documented interpretation of data values.

The annotation schema for scenario tagging follows the same principles of the annotation schema for multi-sensor data labeling, meaning JSON and JSON schema, as described in chapter Multi-sensor data labeling.

8.4. Structure

The ASAM OpenLABEL annotation schema for scenario tagging is structured as a dictionary and can be described from top to bottom. This section contains diagrams intended to visualize the structure. The details of the structure can be found in the ASAM OpenLABEL JSON schema file.

Any ASAM OpenLABEL JSON data shall have a root key named openlabel. Its value is a dictionary containing the rest of the structure as described in the next sections. The version of the schema shall be defined inside the metadata structure, under the key schema_version. All other entries are optional.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        }
    }
}
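
A tagging instance can be checked against the ASAM OpenLABEL JSON schema with any JSON schema Draft 7 validator. The following minimal sketch uses the Python jsonschema package; the schema file name is a placeholder for the ASAM OpenLABEL JSON schema file.

Python example

import json
from jsonschema import validate  # pip install jsonschema

# Placeholder path for the ASAM OpenLABEL JSON schema file.
with open("openlabel_json_schema.json") as f:
    schema = json.load(f)

instance = {"openlabel": {"metadata": {"schema_version": "1.0.0"}}}
validate(instance=instance, schema=schema)  # raises ValidationError if invalid
print("instance is valid")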

The following example shows a JSON payload corresponding to the first level items inside the root openlabel value, which are related to scenario tagging.

JSON example

{
    "openlabel": {
        "tags": { ... },
        "metadata": { ... },
        "ontologies": { ... },
    }
}

For scenario tagging, the ASAM OpenLABEL structure defines dictionaries for the tags. Each entry of the dictionary is a key-value pair where the key is a unique identifier of the tag. The value is the container of its static information. Supporting structures define the used ontologies to provide linkage to external semantic definitions of terms.

Figure 64. ASAM OpenLABEL tagging structure

Figure 64 shows the ASAM OpenLABEL data structure for scenario tagging.

Class

openlabel

The OpenLABEL root JSON object, which contains all other JSON objects.

Additional properties:

false

Type:

object

Diagram
Figure 65. Diagram of the openlabel class
Table 25. Properties of the openlabel class
| Name | Type | Required | Additional properties | Reference | Description |
|---|---|---|---|---|---|
| actions | object | | false | #/definitions/action | This is the JSON object of OpenLABEL actions. Action keys are strings containing numerical UIDs or 32-byte UUIDs. |
| contexts | object | | false | #/definitions/context | This is the JSON object of OpenLABEL contexts. Context keys are strings containing numerical UIDs or 32-byte UUIDs. |
| coordinate_systems | | | | #/definitions/coordinate_systems | This is a JSON object which contains OpenLABEL coordinate systems. Coordinate system keys can be any string, for example, a friendly coordinate system name. |
| events | object | | false | #/definitions/event | This is the JSON object of OpenLABEL events. Event keys are strings containing numerical UIDs or 32-byte UUIDs. |
| frame_intervals | array | | | #/definitions/frame_interval | This is an array of frame intervals. |
| frames | object | | false | #/definitions/frame | This is the JSON object of frames that contain the dynamic, timewise, annotations. Keys are strings containing numerical frame identifiers, which are denoted as master frame numbers. |
| metadata | | true | | #/definitions/metadata | This JSON object contains information, that is, metadata, about the annotation file itself. |
| objects | object | | false | #/definitions/object | This is the JSON object of OpenLABEL objects. Object keys are strings containing numerical UIDs or 32-byte UUIDs. |
| ontologies | | | | #/definitions/ontologies | This is the JSON object of OpenLABEL ontologies. Ontology keys are strings containing numerical UIDs or 32-byte UUIDs. Ontology values may be strings, for example, encoding a URI, or JSON objects containing a URI string and optional lists of included and excluded terms. |
| relations | object | | false | #/definitions/relation | This is the JSON object of OpenLABEL relations. Relation keys are strings containing numerical UIDs or 32-byte UUIDs. |
| resources | | | | #/definitions/resources | This is the JSON object of OpenLABEL resources. Resource keys are strings containing numerical UIDs or 32-byte UUIDs. Resource values are strings that describe an external resource, for example, file names or URLs, that may be used to link data of the OpenLABEL annotation content with external existing content. |
| streams | | | | #/definitions/streams | This is a JSON object which contains OpenLABEL streams. Stream keys can be any string, for example, a friendly stream name. |
| tags | object | | false | #/definitions/tag | This is the JSON object of tags. Tag keys are strings containing numerical UIDs or 32-byte UUIDs. |

8.5. Tags

Tags are used to provide information about a certain data file, which may be specified using the tagged_file entry of the metadata in the JSON file.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0",
            "tagged_file": "../resources/scenarios/scenario.file"
        }
    }
}

Similarly to object_data, tags may have tag_data in the form of generic data types (that is, num, vec, text, boolean). See Data types (generic) for details.

Figure 66. ASAM OpenLABEL attributes

Class

tag

A tag is a special type of label that can be attached to any type of content, such as images, data containers, folders. In ASAM OpenLABEL the main purpose of a tag is to allow adding metadata to scenario descriptions.

Additional properties:

true

Type:

object

Diagram
Figure 67. Diagram of the tag class
Table 26. Properties of the tag class
| Name | Type | Required | Reference | Description |
|---|---|---|---|---|
| ontology_uid | string | true | | This is the UID of the ontology where the type of this tag is defined. |
| resource_uid | | | #/definitions/resource_uid | This is a JSON object that contains links to external resources. Resource_uid keys are strings containing numerical UIDs or 32-byte UUIDs. Resource_uid values are strings describing the identifier of the element in the external resource. |
| tag_data | | | #/definitions/tag_data | Tag data can be a JSON object or a string which contains additional information about this tag. |
| type | string | true | | The type of a tag defines the class the tag corresponds to. |

tag_data

Tag data can be a JSON object or a string which contains additional information about this tag.

Diagram
Figure 68. Diagram of the tag data class

JSON example

{
	"tags" : {
        "0" : {
            "type" : "RoadTypeMotorway",
            "ontology_uid" : "0"
        },
        "1" : {
            "type" : "LaneSpecificationLaneCount",
            "ontology_uid" : "0",
            "tag_data" : {
                "vec" : [{
                        "type" : "values",
                        "val" : ["2", "3"]
                    }
                ]
            }
        },
        "2" : {
            "type" : "LaneSpecificationDimensions",
            "ontology_uid" : "0",
            "tag_data" : {
                "vec" : [{
                        "type" : "range",
                        "val" : ["3.4", "3.7"]
                    }, {
                        "type" : "range",
                        "val" : ["3.9", "4.1"]
                    }
                ]
            }
        },
        "3" : {
            "type" : "WeatherRain",
            "ontology_uid" : "0",
            "tag_data" : {
                "num" : [{
                        "type" : "min",
                        "val" : "1.2"
                    }
                ]
            }
        },
        "4" : {
            "type" : "MotionWalk",
            "ontology_uid" : "0"
        },
        "5" : {
            "type" : "MotionDrive",
            "ontology_uid" : "0"
        },
        "6" : {
            "type" : "scenarioUniqueReference",
            "ontology_uid" : "0",
            "tag_data" : {
                "text" : [{
                        "type" : "value",
                        "val" : "{02ed611e-a376-11eb-973f-b818cf5bef8c}"
                    }
                ]
            }
        },
        "7" : {
            "type" : "scenarioName",
            "ontology_uid" : "0",
            "tag_data" : {
                "text" : [{
                        "type" : "value",
                        "val" : "FSD01726287 Roundabout first exit"
                    }
                ]
            }
        },
        "9" : {
            "type" : "scenarioVersion",
            "ontology_uid" : "0",
            "tag_data" : {
                "text" : [{
                        "type" : "value",
                        "val" : "1.0"
                    }
                ]
            }
        },
        "10" : {
            "type" : "projectId",
            "ontology_uid" : "1",
            "tag_data" : {
                "text" : [{
                        "type" : "value",
                        "val" : "123456"
                    }
                ]
            }
        },
        "12" : {
            "type" : "ToucanCrossing",
            "ontology_uid" : "2"
        },
        "13" : {
            "type" : "RainDropletSize",
            "ontology_uid" : "2",
            "tag_data" : {
                "num" : [{
                        "type" : "value",
                        "val" : "0.2"
                    }
                ]
            }
        }
    }
}

8.6. Ontologies

Tags are particularly sensitive to precise definitions as they are mainly used for searching. As a consequence, tags may be defined in specific ontologies.

Class

ontologies

This is the JSON object of OpenLABEL ontologies. Ontology keys are strings containing numerical UIDs or 32-byte UUIDs. Ontology values may be strings, for example, encoding a URI, or JSON objects containing a URI string and optional lists of included and excluded terms.

Additional properties:

false

Type:

object

Diagram
Figure 69. Diagram of the ontologies class

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0",
            "tagged_file": "../resources/scenarios/some_scenario_file"
        },
        "ontologies": {
            "0": {
                "uri": "https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl"
            }
        },
        "tags": {
            "0": {
                "type": "RoundaboutDouble",
                "ontology_uid": "0"
            }
        }
    }
}

The example shows an ontology referenced by the URL https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl and assigned the identifier 0. Tag 0 references this ontology through its ontology_uid key, using the identifier value 0, so that the tag can be semantically verified against the ontology.

Tag subset inclusion and exclusion may be defined for each ontology.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "ontologies": {
            "0": {
                "uri": "https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl",
                "boundary_list": ["RoadTypeMotorway", "RoadTypeMinor"],
                "boundary_mode": "include"
            },
            "1": {
                "uri": "https://mycompany/ontologies/v1",
                "boundary_list": ["JunctionRoundabout"],
                "boundary_mode": "exclude"
            }
        }
    }
}

The example shows tagging subset inclusion for the tags RoadTypeMotorway and RoadTypeMinor, by setting boundary_mode to include for the first ontology. It also shows tagging subset exclusion for the tag JunctionRoundabout, by setting boundary_mode to exclude for the second ontology.

8.7. Data types (generic)

ASAM OpenLABEL defines geometric and non-geometric (generic) data types, which all together provide the flexibility needed to represent any kind of information on labels or tags.

Non-geometric (generic) tag_data are primitive data types like the following:

  • Boolean: boolean

  • Number: A number with floating-point precision: num

  • Text: A string or chain of characters: text

  • Vector: An array of numbers or strings: vec

These are attributes that can be used freely to express any property of the tag.

Rules

  • For scenario tagging, only non-geometric (generic) data types are considered.

  • tags shall have a unique identifier.

  • tag_data shall have a unique name.


8.7.1. Boolean

A Boolean object_data. It has the same properties as the other generic attributes.

Class

boolean

A boolean.

Additional properties:

true

Type:

object

Diagram
Figure 70. Diagram of the boolean class
Table 27. Properties of the boolean class
| Name | Type | Required | Reference | Description |
|---|---|---|---|---|
| attributes | | | #/definitions/attributes | Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes. |
| coordinate_system | string | | | Name of the coordinate system in respect of which this object data is expressed. |
| name | string | | | This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers. |
| type | string | | | This attribute specifies how the boolean shall be considered. In this schema the only possible option is as a value. |
| val | boolean | true | | The boolean value. |

JSON example

{
    "boolean": [{
        "name": "visible",
        "val": true
    }]
}

8.7.2. Number

The most basic attribute or generic data type is num. It defines a floating-point number and is specified by a name key and a val key. Optional properties are coordinate_system and other object_data nested as attributes.

Class

num

A number.

Additional properties:

true

Type:

object

Diagram
Figure 71. Diagram of the num class
Table 28. Properties of the num class
| Name | Type | Required | Reference | Description |
|---|---|---|---|---|
| attributes | | | #/definitions/attributes | Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes. |
| coordinate_system | string | | | Name of the coordinate system in respect of which this object data is expressed. |
| name | string | | | This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers. |
| type | string | | | This attribute specifies whether the number shall be considered as a value, a minimum, or a maximum in its context. |
| val | number | true | | The numerical value of the number. |

JSON example

{
    "num": [{
        "name": "height_m",
        "val": 1.98
    }]
}

The value of the key num is an array. Any element, for example an object, may have multiple object_data entries of num. The same principle applies to all other object_data.

Nesting generic data types, for example text, into other generic data types, for example num, can be done to any depth. ASAM OpenLABEL does not limit the hierarchy depth.

JSON example

{
    "num": [{
        "name": "height_m",
        "val": 1.98,
        "coordinate_system": "WORLD",
        "attributes": {
            "num": [{
                "name": "confidence",
                "val": 0.98
            }]
        },
        "custom_prop1": "SomeValue",
        "custom_prop2": 0.99
    }]
}

8.7.3. Text

A text is a string or chain of characters which represent textual information. It has the same properties as the other generic attributes.

Class

text

A text.

Additional properties:

true

Type:

object

Diagram
Figure 72. Diagram of the text class
Table 29. Properties of the text class
| Name | Type | Required | Reference | Description |
|---|---|---|---|---|
| attributes | | | #/definitions/attributes | Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes. |
| coordinate_system | string | | | Name of the coordinate system in respect of which this object data is expressed. |
| name | string | | | This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers. |
| type | string | | | This attribute specifies how the text shall be considered. The only possible option is as a value. |
| val | string | true | | The characters of the text. |

JSON example

{
    "text": [{
        "name": "license plate",
        "val": "8440CMN"
    }]
}

8.7.4. Vector

Arrays of text or num can be created under vec. It has the same properties as the other generic attributes.

Class

vec

A vector (list) of numbers or strings.

Additional properties:

true

Type:

object

Diagram
Figure 73. Diagram of the vec class
Table 30. Properties of the vec class
| Name | Type | Required | Reference | Description |
|---|---|---|---|---|
| attributes | | | #/definitions/attributes | Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes. |
| coordinate_system | string | | | Name of the coordinate system in respect of which this object data is expressed. |
| name | string | | | This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers. |
| type | string | | | This attribute specifies whether the vector shall be considered as a descriptor of individual values or as a definition of a range. |
| val | array | true | | The values of the vector (list) of numbers or strings. |

JSON example

{
    "vec": [{
        "name": "scores",
        "val": [0.98, 0.76, 0.98]
    }]
}

The example shows an array of numbers.

JSON example

{
    "vec": [{
        "name": "locations",
        "val": ["Madrid", "Paris", "Rome"]
    }]
}

The example shows an array of strings.

8.8. Use cases

8.8.1. Scenario tagging example

The following example shows an ASAM OpenLABEL instance which has been used to tag an OpenSCENARIO 1.x file.

Figure 74. Crossroad scenario

The example contains ODD tags summarizing the road features present in the scenario, behavior tags for the car and bus and their driving behavior, as well as administration tags describing scenario ID, name, version, owner, and license.

The scenario is contained in a separate file scenario123.osc and is referenced using the tagged_file element.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0",
            "tagged_file": "../resources/scenarios/scenario123.osc"
        },
        "ontologies": {
            "0": {
                "uri": "https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl",
                "boundary_list": ["DrivableAreaSigns", "DrivableAreaEdge","DrivableAreaSurface"],
                "boundary_mode": "exclude"
            }
        },
        "tags": {
            "0": {
                "type": "RoadTypeMinor",
                "ontology_uid": "0"
            },
            "1": {
                "type": "HorizontalStraights",
                "ontology_uid": "0"
            },
            "3": {
                "type": "LaneTypeTraffic",
                "ontology_uid": "0"
            },
            "4": {
                "type": "ZoneSchool",
                "ontology_uid": "0"
            },
            "5": {
                "type": "IntersectionCrossroad",
                "ontology_uid": "0"
            },
            "6": {
                "type": "SpecialStructurePedestrianCrossing",
                "ontology_uid": "0"
            },
            "7": {
                "type": "WeatherWind",
                "ontology_uid": "0",
                "tag_data": {
                    "vec": [{
                        "type": "range",
                        "val": ["10", "25"]
                        }
                    ]
                }
            },
            "8": {
                "type": "IlluminationDay",
                "ontology_uid": "0"
            },
            "9": {
                "type": "FixedStructureBuilding",
                "ontology_uid": "0"
            },
            "10": {
                "type": "FixedStructureVegetation",
                "ontology_uid": "0"
            },
            "10": {
                "type": "TravelDirectionRight",
                "ontology_uid": "0"
            },
            "11": {
                "type": "VehicleCar",
                "ontology_uid": "0"
            },
            "12": {
                "type": "VehicleBus",
                "ontology_uid": "0"
            },
            "13": {
                "type": "MotionDrive",
                "ontology_uid": "0"
            },
            "15": {
                "type": "scenarioUniqueReference",
                "ontology_uid": "0",
                "tag_data": {
                    "text": [{
                        "type": "value",
                        "val": "c133241e-f325-11eb-a72f-e817714ba02d"
                    }]
                }
            },
            "16": {
                "type": "scenarioName",
                "ontology_uid": "0",
                "tag_data": {
                    "text": [{
                        "type": "value",
                        "val": "Scenario 123"
                    }]
                }
            },
            "17": {
                "type": "scenarioVersion",
                "ontology_uid": "0",
                "tag_data": {
                    "text": [{
                        "type": "value",
                        "val": "1.0"
                    }]
                }
            },
            "18": {
                "type": "ownerURL",
                "ontology_uid": "0",
                "tag_data": {
                    "text": [{
                        "type": "value",
                        "val": "https://example.com"
                    }]
                }
            },
            "19": {
                "type": "licenseURI",
                "ontology_uid": "0",
                "tag_data": {
                    "text": [{
                        "type": "value",
                        "val": "https://example.org/licenses/publicdomain/"
                    }]
                }
            }
        }
    }
}

8.8.2. Ontology extension

Below is an example of how the ASAM OpenLABEL scenario tagging ontology may be extended to add a new administration tag to record the project that a scenario was created for.

In the ASAM OpenLABEL scenario tagging ontology, administration tags are generally defined as properties which apply to the Scenario class. To add a new administration tag, a new property shall be defined.

RDF turtle example

@prefix ex: <https://example.org/ontologies/v1/> .
@prefix asam: <https://openlabel.asam.net/V1-0-0/ontologies/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:projectReference a rdfs:Property ;
    rdfs:label "Project Reference"@en ;
    rdfs:comment "The project which the scenario was created for"@en ;
    rdfs:domain asam:Scenario ;
    rdfs:range rdfs:Literal .

The example shows how a new property, projectReference, is defined in an RDF turtle file.

Note the following:

  • The ASAM OpenLABEL scenario tagging ontology is referenced using the asam: prefix.

  • A new namespace, ex:, is specified for the ontology extension.

  • The name for the new property follows the convention of using camel case.

Having created the ontology extension, it can be used from a tagging instance by referencing the new ontology from the ontologies element.

The new ontology should be made available for download from the specified URI to enable users of the tagging instance to process the file.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0",
            "tagged_file": "../resources/scenarios/scenario.osc"
        },
        "ontologies": {
            "0": {
                "uri": "https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl"
            },
            "1": {
                "uri": "https://example.org/ontologies/v1"
            }
        },
        "tags": {
            "0": {
                "type": "projectReference",
                "ontology_uid": "1",
                "tag_data": {
                    "text": [{
                        "type": "value",
                        "val": "X0002465"
                    }]
                }
            }
        }
    }
}

The example shows how the new projectReference tag is used to tag a scenario with the project reference X0002465. The ontology_uid refers to the ontology extension.

8.8.3. Embedded scenario

The following is an example of how a scenario definition can be embedded in a tagging instance, as an alternative to being stored in a separate file, in order to aid portability.

When embedding a scenario definition, the scenarioDefinitionLanguageURI tag should be used to specify which scenario definition language has been used for the scenario definition.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0"
        },
        "ontologies": {
            "0": {
                "uri": "https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl"
            }
        },
        "tags": {
            "0": {
                "type": "RoadTypeMinor",
                "ontology_uid": "0"
            },
            "1": {
                "type": "JunctionRoundabout",
                "ontology_uid": "0"
            },
            "2": {
                "type": "LaneSpecificationLaneCount",
                "ontology_uid": "0",
                "tag_data": {
                    "vec": [{
                        "type": "values",
                        "val": [1, 2]
                        }
                    ]
                }
            },
            "3": {
                "type": "scenarioDefinitionLanguageURI",
                "ontology_uid": "0",
                "tag_data": {
                    "text": [{
                        "type": "value",
                        "val": "https://example.org/languages/SDL/1.0/"
                    }]
                }
            },
            "4": {
                "type": "scenarioDefinition",
                "ontology_uid": "0",
                "tag_data": {
                    "text": [{
                        "type": "value",
                        "val": "def ra1 as Roundabout; def r1, r2, r3 as Road.Minor; ra1.Exits = [r1,r2,r3]; r1.Lanes = 2;"
                    }]
                }
            }
        }
    }
}
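
A consumer can recover the embedded scenario definition from the tags above, for example to write it back out to a file. The following Python sketch is non-normative; the file name embedded_scenario.json is an assumption.

Python example

import json

# Non-normative sketch: extract the embedded scenario definition and
# its scenario definition language URI from a tagging instance.
with open("embedded_scenario.json") as f:
    tags = json.load(f)["openlabel"]["tags"]

def first_text(tag):
    # Text tag data carries its value in the "val" field.
    return tag["tag_data"]["text"][0]["val"]

by_type = {t["type"]: t for t in tags.values()}
language_uri = first_text(by_type["scenarioDefinitionLanguageURI"])
definition = first_text(by_type["scenarioDefinition"])
print(language_uri)
print(definition)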

8.8.4. Scenario instance in Turtle

When processing a tagging instance, the ASAM OpenLABEL scenario tagging ontology may be used to help create a model of the scenario, which can be loaded into a reasoning engine to determine inferred tags.

JSON example

{
    "openlabel": {
        "metadata": {
            "schema_version": "1.0.0",
            "tagged_file": "../resources/scenarios/scenario123.osc"
        },
        "ontologies": {
            "0": {
                "uri": "https://openlabel.asam.net/V1-0-0/ontologies/openlabel_ontology_scenario_tags.ttl"
            }
        },
        "tags": {
            "0": {
                "type": "RoundaboutNormal",
                "ontology_uid": "0"
            },
            "1": {
                "type": "WeatherRain",
                "ontology_uid": "0",
                "tag_data": {
                    "num": [{
                        "type": "value",
                        "val": "1.2"
                        }
                    ]
                }
            },
            "16": {
                "type": "scenarioName",
                "ontology_uid": "0",
                "tag_data": {
                    "text": [{
                        "type": "value",
                        "val": "Scenario 123"
                    }]
                }
            }
        }
    }
}

The example shows a tagging instance of a scenario. It is followed by a corresponding model of the scenario in RDF Turtle format.

RDF Turtle example

@prefix asam: <https://openlabel.asam.net/V1-0-0/ontologies/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix data: <https://example.org/data/> .

data:tagRoundabout a asam:RoundaboutNormal .

data:tagRain a asam:WeatherRain ;
    asam:weatherRainValue 1.2 .

data:scenario123 a asam:Scenario ;
    asam:scenarioName "Scenario 123" ;
    asam:hasTag data:tagRoundabout ;
    asam:hasTag data:tagRain .

In this RDF Turtle definition, the scenario is defined as an instance of the Scenario class, which is defined in the ASAM OpenLABEL scenario tagging ontology and represents the domain of all scenarios. The scenario instance is also assigned the name Scenario 123 from the tagging instance, using the scenarioName property of the Administration tag.

The tags from the tagging instance are defined in RDF Turtle using the ODD tag classes in the ontology. In this example, there is a tag instance tagRain of type WeatherRain with a rainfall value of 1.2, and a tag instance tagRoundabout of type RoundaboutNormal.

To associate these tag instances with the scenario, the ontology defines a hasTag property.

This RDF Turtle definition can be loaded into a reasoning engine to determine inferred tags, for example, whether the scenario contains a junction.
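
As a non-normative illustration, the following Python sketch loads such a Turtle model with the rdflib library and runs a SPARQL ASK query over the asserted triples. Determining genuinely inferred tags, such as membership in a Junction superclass, would additionally require loading the ontology and applying an RDFS/OWL reasoner.

Python example

from rdflib import Graph

# Non-normative sketch: query the scenario model with SPARQL. Inferred
# tags (e.g. Junction as a superclass of RoundaboutNormal) would need
# the ontology plus an RDFS/OWL reasoner on top of this.
turtle = """
@prefix asam: <https://openlabel.asam.net/V1-0-0/ontologies/> .
@prefix data: <https://example.org/data/> .

data:tagRoundabout a asam:RoundaboutNormal .

data:scenario123 a asam:Scenario ;
    asam:hasTag data:tagRoundabout .
"""
graph = Graph()
graph.parse(data=turtle, format="turtle")

ask = """
PREFIX asam: <https://openlabel.asam.net/V1-0-0/ontologies/>
ASK {
    ?scenario a asam:Scenario ;
              asam:hasTag ?tag .
    ?tag a asam:RoundaboutNormal .
}
"""
print(graph.query(ask).askAnswer)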

9. References

9.1. Classes

action

An action is a type of element intended to describe temporal situations with semantic load, such as a certain activity happening in real life, for example, crossing-zebra-cross, standing-still, or playing-guitar. As such, actions are defined by their type, the frame intervals in which the action happens, and any additional action data, for example, numbers, booleans, or text, as attributes of the actions.

Additional properties:

false

Type:

object

Diagram
Figure 75. Diagram of the action class
Table 31. Properties of the action class
Name Type Required Reference Description

action_data

#/definitions/action_data

Additional data to describe attributes of the action.

action_data_pointers

#/definitions/element_data_pointers

This is a JSON object which contains OpenLABEL element data pointers. Element data pointer keys shall be the "name" of the element data this pointer points to.

frame_intervals

array

#/definitions/frame_interval

The array of frame intervals where this action exists or is defined.

name

string

true

Name of the action. It is a friendly name and not used for indexing.

ontology_uid

string

This is the UID of the ontology where the type of this action is defined.

resource_uid

#/definitions/resource_uid

This is a JSON object that contains links to external resources. Resource_uid keys are strings containing numerical UIDs or 32 bytes UUIDs. Resource_uid values are strings describing the identifier of the element in the external resource.

type

string

true

The type of an action defines the class the action corresponds to.

action_data

Additional data to describe attributes of the action.

Additional properties:

false

Type:

object

Diagram
Figure 76. Diagram of the action data class
Table 32. Properties of the action data class
Name Type Reference Description

boolean

array

#/definitions/boolean

List of "boolean" that describe this action.

num

array

#/definitions/num

List of "num" that describe this action.

text

array

#/definitions/text

List of "text" that describe this action.

vec

array

#/definitions/vec

List of "vec" that describe this action.

area_reference

An area reference is a JSON object which defines the area of a set of 3D line segments by means of defining the indexes of all lines which outline the area. Note that coplanar 3D lines are assumed.

Additional properties:

true

Type:

object

Diagram
Figure 77. Diagram of the area reference class
Table 33. Properties of the area reference class
Name Type Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

name

string

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

reference_type

string

This is the type of the reference as a string with the name of the element data (e.g. line_reference)

val

array

The array of indexes of the references of type reference_type.

attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

Additional properties:

false

Type:

object

Diagram
Figure 78. Diagram of the attributes class
Table 34. Properties of the attributes class
Name Type Reference Description

boolean

array

#/definitions/boolean

A boolean.

num

array

#/definitions/num

A number.

text

array

#/definitions/text

A text.

vec

array

#/definitions/vec

A vector (list) of numbers or strings.

bbox

A 2D bounding box is defined as a 4-dimensional vector [x, y, w, h], where [x, y] is the center of the bounding box and [w, h] represent the width (horizontal, x-coordinate dimension) and height (vertical, y-coordinate dimension), respectively.

Additional properties:

true

Type:

object

Diagram
Figure 79. Diagram of the bbox class
Table 35. Properties of the bbox class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

array

true

The array of 4 values that define the [x, y, w, h] values of the bbox.
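
Since many tools expect corner-based boxes, a conversion from this center-based encoding is often needed. A minimal, non-normative Python sketch:

Python example

# Non-normative sketch: convert a center-based bbox val [x, y, w, h]
# to corner coordinates (x_min, y_min, x_max, y_max).
def bbox_to_corners(val):
    x, y, w, h = val
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

print(bbox_to_corners([320.0, 240.0, 100.0, 50.0]))
# (270.0, 215.0, 370.0, 265.0)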

binary

A binary payload.

Additional properties:

true

Type:

object

Diagram
Figure 80. Diagram of the binary class
Table 36. Properties of the binary class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

data_type

string

true

This is a string that declares the type of the values of the binary object.

encoding

string

true

This is a string that declares the encoding type of the bytes for this binary payload, for example, "base64".

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

string

true

A string with the encoded bytes of this binary payload.

boolean

A boolean.

Additional properties:

true

Type:

object

Diagram
Figure 81. Diagram of the boolean class
Table 37. Properties of the boolean class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

name

string

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

type

string

This attribute specifies how the boolean shall be considered. In this schema the only possible option is as a value.

val

boolean

true

The boolean value.

context

A context is a type of element which defines any nonspatial or temporal annotation. Contexts can be used to add richness to the contextual information of a scene, including location, weather, application-related information.

Additional properties:

false

Type:

object

Diagram
Figure 82. Diagram of the context class
Table 38. Properties of the context class
Name Type Required Reference Description

context_data

#/definitions/context_data

Additional data to describe attributes of the context.

context_data_pointers

#/definitions/element_data_pointers

This is a JSON object which contains OpenLABEL element data pointers. Element data pointer keys shall be the "name" of the element data this pointer points to.

frame_intervals

array

#/definitions/frame_interval

The array of frame intervals where this context exists or is defined.

name

string

true

Name of the context. It is a friendly name and not used for indexing.

ontology_uid

string

This is the UID of the ontology where the type of this context is defined.

resource_uid

#/definitions/resource_uid

This is a JSON object that contains links to external resources. Resource_uid keys are strings containing numerical UIDs or 32 bytes UUIDs. Resource_uid values are strings describing the identifier of the element in the external resource.

type

string

true

The type of a context defines the class the context corresponds to.

context_data

Additional data to describe attributes of the context.

Additional properties:

false

Type:

object

Diagram
Figure 83. Diagram of the context data class
Table 39. Properties of the context data class
Name Type Reference Description

boolean

array

#/definitions/boolean

List of "boolean" that describe this context.

num

array

#/definitions/num

List of "num" that describe this context.

text

array

#/definitions/text

List of "text" that describe this context.

vec

array

#/definitions/vec

List of "vec" that describe this context.

coordinate_system

A coordinate system is a 3D reference frame. Spatial information on objects and their properties can be defined with respect to coordinate systems.

Additional properties:

true

Diagram
Figure 84. Diagram of the coordinate system class
Table 40. Properties of the coordinate system class
Name Type Required Reference Description

children

array

List of children of this coordinate system.

parent

string

true

This is the string UID of the parent coordinate system this coordinate system is referring to.

pose_wrt_parent

#/definitions/transform_data

JSON object containing the transform data.

type

string

true

This is a string that describes the type of the coordinate system, for example, "local" or "geo".
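
The parent references allow poses to be composed along the chain of coordinate systems. The following non-normative Python sketch assumes that pose_wrt_parent carries a flattened, row-major 4x4 matrix under a matrix4x4 key and that the root system has an empty parent string; other transform data encodings would need their own handling.

Python example

import numpy as np

# Non-normative sketch: compose a coordinate system's pose with
# respect to the root by walking the "parent" chain. Assumes
# "pose_wrt_parent" holds a flattened, row-major 4x4 matrix under a
# "matrix4x4" key and that the root has an empty "parent" string.
def pose_wrt_root(coordinate_systems, name):
    pose = np.eye(4)
    while name:
        cs = coordinate_systems[name]
        data = cs.get("pose_wrt_parent", {})
        if "matrix4x4" in data:
            pose = np.asarray(data["matrix4x4"]).reshape(4, 4) @ pose
        name = cs["parent"]
    return pose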

coordinate_systems

This is a JSON object which contains OpenLABEL coordinate systems. Coordinate system keys can be any string, for example, a friendly coordinate system name.

Additional properties:

false

Type:

object

Diagram
Figure 85. Diagram of the coordinate systems class
cuboid

A cuboid or 3D bounding box. It is defined by the position of its center, the rotation in 3D, and its dimensions.

Additional properties:

true

Type:

object

Diagram
Figure 86. Diagram of the cuboid class
Table 41. Properties of the cuboid class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

true

List of values encoding the position, rotation and dimensions. Two options are supported, using 9 or 10 values. If 9 values are used, the format is (x, y, z, rx, ry, rz, sx, sy, sz), where (x, y, z) encodes the position, (rx, ry, rz) encodes the Euler angles that encode the rotation, and (sx, sy, sz) are the dimensions of the cuboid in its object coordinate system. If 10 values are used, the format is (x, y, z, qx, qy, qz, qw, sx, sy, sz), where the rotation is encoded by the quaternion (qx, qy, qz, qw) instead of Euler angles.
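
A consumer therefore has to branch on the length of val. A non-normative Python sketch:

Python example

# Non-normative sketch: split a cuboid "val" array into position,
# rotation, and size for both supported encodings.
def parse_cuboid(val):
    if len(val) == 9:
        return {"position": val[0:3], "euler_angles": val[3:6],
                "size": val[6:9]}
    if len(val) == 10:
        return {"position": val[0:3], "quaternion": val[3:7],
                "size": val[7:10]}
    raise ValueError("cuboid val must contain 9 or 10 values")

print(parse_cuboid([0.0, 0.0, 0.75, 0.0, 0.0, 0.3, 4.2, 1.8, 1.5]))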

element_data_pointer

This item contains pointers to element data of elements, indexed by "name", and containing information about the element data type, for example, bounding box, cuboid, and the frame intervals in which this element_data exists within an element. That means, these pointers can be used to explore element data dynamic information within the JSON content.

Type:

object

Diagram
Figure 87. Diagram of the element data pointer class
Table 42. Properties of the element data pointer class
Name Type Required Reference Description

attribute_pointers

object

This is a JSON object which contains pointers to the attributes of the element data pointed by this pointer. The attributes pointer keys shall be the "name" of the attribute of the element data this pointer points to.

frame_intervals

array

true

#/definitions/frame_interval

List of frame intervals of the element data pointed by this pointer.

type

string

Type of the element data pointed by this pointer.
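
For example, the frame intervals of a particular piece of element data can be looked up directly from the pointers, without scanning all frames. A non-normative Python sketch; the object UID "0" and the data name "head" are illustrative only.

Python example

# Non-normative sketch: look up the frame intervals of a named piece
# of element data via the element data pointers.
def frame_intervals_of(openlabel, object_uid, data_name):
    pointer = openlabel["objects"][object_uid]["object_data_pointers"][data_name]
    return [(fi["frame_start"], fi["frame_end"])
            for fi in pointer["frame_intervals"]]

# Usage (illustrative): frame_intervals_of(root, "0", "head")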

element_data_pointers

This is a JSON object which contains OpenLABEL element data pointers. Element data pointer keys shall be the "name" of the element data this pointer points to.

Additional properties:

false

Type:

object

Diagram
Figure 88. Diagram of the element data pointers class
event

An event is an instantaneous situation that happens without a temporal interval. Events complement actions providing a mechanism to specify triggers or to connect actions and objects with causality relations.

Additional properties:

false

Type:

object

Diagram
Figure 89. Diagram of the event class
Table 43. Properties of the event class
Name Type Required Reference Description

event_data

#/definitions/event_data

Additional data to describe attributes of the event.

event_data_pointers

#/definitions/element_data_pointers

This is a JSON object which contains OpenLABEL element data pointers. Element data pointer keys shall be the "name" of the element data this pointer points to.

frame_intervals

array

#/definitions/frame_interval

The array of frame intervals where this event exists or is defined. Note that events are thought to be instantaneous. That means, they are defined for a single frame interval where the starting and ending frames are the same.

name

string

true

Name of the event. It is a friendly name and not used for indexing.

ontology_uid

string

This is the UID of the ontology where the type of this event is defined.

resource_uid

#/definitions/resource_uid

This is a JSON object that contains links to external resources. Resource_uid keys are strings containing numerical UIDs or 32 bytes UUIDs. Resource_uid values are strings describing the identifier of the element in the external resource.

type

string

true

The type of an event defines the class the event corresponds to.

event_data

Additional data to describe attributes of the event.

Additional properties:

false

Type:

object

Diagram
Figure 90. Diagram of the event data class
Table 44. Properties of the event data class
Name Type Reference Description

boolean

array

#/definitions/boolean

List of "boolean" that describe this event.

num

array

#/definitions/num

List of "num" that describe this event.

text

array

#/definitions/text

List of "text" that describe this event.

vec

array

#/definitions/vec

List of "vec" that describe this event.

frame

A frame is a container of dynamic, timewise, information.

Additional properties:

false

Type:

object

Diagram
Figure 91. Diagram of the frame class
Table 45. Properties of the frame class
Name Type Additional properties Reference Description

actions

object

false

#/definitions/action_data

This is a JSON object that contains dynamic information on OpenLABEL actions. Action keys are strings containing numerical UIDs or 32 bytes UUIDs. Action values may contain an "action_data" JSON object.

contexts

object

false

#/definitions/context_data

This is a JSON object that contains dynamic information on OpenLABEL contexts. Context keys are strings containing numerical UIDs or 32 bytes UUIDs. Context values may contain a "context_data" JSON object.

events

object

false

#/definitions/event_data

This is a JSON object that contains dynamic information on OpenLABEL events. Event keys are strings containing numerical UIDs or 32 bytes UUIDs. Event values may contain an "event_data" JSON object.

frame_properties

object

true

#/definitions/stream

This is a JSON object which contains information about this frame.

objects

object

false

#/definitions/object_data

This is a JSON object that contains dynamic information on OpenLABEL objects. Object keys are strings containing numerical UIDs or 32 bytes UUIDs. Object values may contain an "object_data" JSON object.

relations

object

false

This is a JSON object that contains dynamic information on OpenLABEL relations. Relation keys are strings containing numerical UIDs or 32 bytes UUIDs. Relation values are empty. The presence of a key-value relation pair indicates the specified relation exists in this frame.

frame_interval

A frame interval defines a starting and ending frame number as a closed interval. That means the interval includes the limit frame numbers.

Additional properties:

false

Type:

object

Diagram
Figure 92. Diagram of the frame interval class
Table 46. Properties of the frame interval class
Name Type Description

frame_end

integer

Ending frame number of the interval.

frame_start

integer

Initial frame number of the interval.
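
Because the interval is closed, both limit frame numbers belong to it, which matters when testing membership. A non-normative Python sketch:

Python example

# Non-normative sketch: closed-interval membership test, so both
# limit frame numbers are included.
def contains(interval, frame):
    return interval["frame_start"] <= frame <= interval["frame_end"]

assert contains({"frame_start": 10, "frame_end": 20}, 20)
assert not contains({"frame_start": 10, "frame_end": 20}, 21)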

image

An image.

Additional properties:

true

Type:

object

Diagram
Figure 93. Diagram of the image class
Table 47. Properties of the image class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

encoding

string

true

This is a string that declares the encoding type of the bytes for this image, for example, "base64".

mime_type

string

true

This is a string that declares the MIME (multipurpose internet mail extensions) of the image, for example, "image/gif".

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

string

true

A string with the encoded bytes of this image.

line_reference

A line reference is a JSON object which defines a 3D line segment by means of defining the indexes of its two extreme points.

Additional properties:

true

Type:

object

Diagram
Figure 94. Diagram of the line reference class
Table 48. Properties of the line reference class
Name Type Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

name

string

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

reference_type

string

This is the type of the reference as a string with the name of the element data (e.g. point3d)

val

array

The array of indexes of the references of type reference_type.

mat

A matrix.

Additional properties:

true

Type:

object

Diagram
Figure 95. Diagram of the mat class
Table 49. Properties of the mat class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

channels

number

true

Number of channels of the matrix.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

data_type

string

true

This is a string that declares the type of the numerical values of the matrix, for example, "float".

height

number

true

Height of the matrix. Expressed in number of rows.

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

array

true

Flattened list of values of the matrix.

width

number

true

Width of the matrix. Expressed in number of columns.

mesh

A mesh encodes a point-line-area structure. It is intended to represent flat 3D meshes, such as several connected parking lots, where points, lines and areas composing the mesh are interrelated and can have their own properties.

Additional properties:

true

Type:

object

Diagram
Figure 96. Diagram of the mesh class
Table 50. Properties of the mesh class
Name Type Additional properties Reference Description

area_reference

object

false

#/definitions/area_reference

This is the JSON object for the areas defined for this mesh. Area keys are strings containing numerical UIDs.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

line_reference

object

false

#/definitions/line_reference

This is the JSON object for the 3D lines defined for this mesh. Line reference keys are strings containing numerical UIDs.

name

string

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

point3d

object

false

#/definitions/point3d

This is the JSON object for the 3D points defined for this mesh. Point3d keys are strings containing numerical UIDs.

metadata

This JSON object contains information, that is, metadata, about the annotation file itself.

Additional properties:

true

Type:

object

Diagram
Figure 97. Diagram of the metadata class
Table 51. Properties of the metadata class
Name Type Required Description

annotator

string

Name or description of the annotator that created the annotations.

comment

string

Additional information or description about the annotation content.

file_version

string

Version number of the OpenLABEL annotation content.

name

string

Name of the OpenLABEL annotation content.

schema_version

string

true

Version number of the OpenLABEL schema this annotation JSON object follows.

tagged_file

string

File name or URI of the data file being tagged.

num

A number.

Additional properties:

true

Type:

object

Diagram
Figure 98. Diagram of the num class
Table 52. Properties of the num class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

name

string

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

type

string

This attribute specifies whether the number shall be considered as a value, a minimum, or a maximum in its context.

val

number

true

The numerical value of the number.

object

An object is the main type of annotation element. Object is designed to represent spatiotemporal entities, such as physical objects in the real world. Objects shall have a name and type. Objects may have static and dynamic data. Objects are the only type of elements that may have geometric data, such as bounding boxes, cuboids, polylines, images, etc.

Additional properties:

false

Type:

object

Diagram
Figure 99. Diagram of the object class
Table 53. Properties of the object class
Name Type Required Reference Description

coordinate_system

string

This is the string key of the coordinate system this object is referenced with respect to.

frame_intervals

array

#/definitions/frame_interval

The array of frame intervals where this object exists or is defined.

name

string

true

Name of the object. It is a friendly name and not used for indexing.

object_data

#/definitions/object_data

Additional data to describe attributes of the object.

object_data_pointers

#/definitions/element_data_pointers

This is a JSON object which contains OpenLABEL element data pointers. Element data pointer keys shall be the "name" of the element data this pointer points to.

ontology_uid

string

This is the UID of the ontology where the type of this object is defined.

resource_uid

#/definitions/resource_uid

This is a JSON object that contains links to external resources. Resource_uid keys are strings containing numerical UIDs or 32 bytes UUIDs. Resource_uid values are strings describing the identifier of the element in the external resource.

type

string

true

The type of an object defines the class the object corresponds to.

object_data

Additional data to describe attributes of the object.

Additional properties:

false

Type:

object

Diagram
Figure 100. Diagram of the object data class
Table 54. Properties of the object data class
Name Type Reference Description

area_reference

array

#/definitions/area_reference

List of "area_reference" that describe this object.

bbox

array

#/definitions/bbox

List of "bbox" that describe this object.

binary

array

#/definitions/binary

List of "binary" that describe this object.

boolean

array

#/definitions/boolean

List of "boolean" that describe this object.

cuboid

array

#/definitions/cuboid

List of "cuboid" that describe this object.

image

array

#/definitions/image

List of "image" that describe this object.

line_reference

array

#/definitions/line_reference

List of "line_reference" that describe this object.

mat

array

#/definitions/mat

List of "mat" that describe this object.

mesh

array

#/definitions/mesh

List of "mesh" that describe this object.

num

array

#/definitions/num

List of "num" that describe this object.

point2d

array

#/definitions/point2d

List of "point2d" that describe this object.

point3d

array

#/definitions/point3d

List of "point3d" that describe this object.

poly2d

array

#/definitions/poly2d

List of "poly2d" that describe this object.

poly3d

array

#/definitions/poly3d

List of "poly3d" that describe this object.

rbbox

array

#/definitions/rbbox

List of "rbbox" that describe this object.

text

array

#/definitions/text

List of "text" that describe this object.

vec

array

#/definitions/vec

List of "vec" that describe this object.

ontologies

This is the JSON object of OpenLABEL ontologies. Ontology keys are strings containing numerical UIDs or 32 bytes UUIDs. Ontology values may be strings, for example, encoding a URI, or JSON objects containing a URI string and optional lists of included and excluded terms.

Additional properties:

false

Type:

object

Diagram
Figure 101. Diagram of the ontologies class
openlabel

The OpenLABEL root JSON object, which contains all other JSON objects.

Additional properties:

false

Type:

object

Diagram
Figure 102. Diagram of the openlabel class
Table 55. Properties of the openlabel class
Name Type Required Additional properties Reference Description

actions

object

false

#/definitions/action

This is the JSON object of OpenLABEL actions. Action keys are strings containing numerical UIDs or 32 bytes UUIDs.

contexts

object

false

#/definitions/context

This is the JSON object of OpenLABEL contexts. Context keys are strings containing numerical UIDs or 32 bytes UUIDs.

coordinate_systems

#/definitions/coordinate_systems

This is a JSON object which contains OpenLABEL coordinate systems. Coordinate system keys can be any string, for example, a friendly coordinate system name.

events

object

false

#/definitions/event

This is the JSON object of OpenLABEL events. Event keys are strings containing numerical UIDs or 32 bytes UUIDs.

frame_intervals

array

#/definitions/frame_interval

This is an array of frame intervals.

frames

object

false

#/definitions/frame

This is the JSON object of frames that contain the dynamic, timewise, annotations. Keys are strings containing numerical frame identifiers, which are denoted as master frame numbers.

metadata

true

#/definitions/metadata

This JSON object contains information, that is, metadata, about the annotation file itself.

objects

object

false

#/definitions/object

This is the JSON object of OpenLABEL objects. Object keys are strings containing numerical UIDs or 32 bytes UUIDs.

ontologies

#/definitions/ontologies

This is the JSON object of OpenLABEL ontologies. Ontology keys are strings containing numerical UIDs or 32 bytes UUIDs. Ontology values may be strings, for example, encoding a URI, or JSON objects containing a URI string and optional lists of included and excluded terms.

relations

object

false

#/definitions/relation

This is the JSON object of OpenLABEL relations. Relation keys are strings containing numerical UIDs or 32 bytes UUIDs.

resources

#/definitions/resources

This is the JSON object of OpenLABEL resources. Resource keys are strings containing numerical UIDs or 32 bytes UUIDs. Resource values are strings that describe an external resource, for example, file name, URLs, that may be used to link data of the OpenLABEL annotation content with external existing content.

streams

#/definitions/streams

This is a JSON object which contains OpenLABEL streams. Stream keys can be any string, for example, a friendly stream name.

tags

object

false

#/definitions/tag

This is the JSON object of tags. Tag keys are strings containing numerical UIDs or 32 bytes UUIDs.

point2d

A 2D point.

Additional properties:

true

Type:

object

Diagram
Figure 103. Diagram of the point2d class
Table 56. Properties of the point2d class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

id

integer

This is an integer identifier of the point in the context of a set of points.

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

array

true

List of two coordinates to define the point, for example, x, y.

point3d

A 3D point.

Additional properties:

true

Type:

object

Diagram
Figure 104. Diagram of the point3d class
Table 57. Properties of the point3d class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

id

integer

This is an integer identifier of the point in the context of a set of points.

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

array

true

List of three coordinates to define the point, for example, x, y, z.

poly2d

A 2D polyline defined as a sequence of 2D points.

Additional properties:

true

Type:

object

Diagram
Figure 105. Diagram of the poly2d class
Table 58. Properties of the poly2d class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

closed

boolean

true

A boolean that defines whether the polyline is closed or not. In case it is closed, it is assumed that the last point of the sequence is connected with the first one.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

hierarchy

array

Hierarchy of the 2D polyline in the context of a set of 2D polylines.

mode

string

true

Mode of the polyline list of values: "MODE_POLY2D_ABSOLUTE" determines that the poly2d list contains the sequence of (x, y) values of all points of the polyline. "MODE_POLY2D_RELATIVE" specifies that only the first point of the sequence is defined with its (x, y) values, while all the rest are defined relative to it. "MODE_POLY2D_SRF6DCC" specifies that the SRF6DCC chain code method is used. "MODE_POLY2D_RS6FCC" specifies that the RS6FCC method is used.

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

true

List of numerical values of the polyline, according to its mode.
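
As a non-normative illustration of "MODE_POLY2D_RELATIVE" under a literal reading of the mode description above (the first point absolute, all following points stored as offsets from it), consider the sketch below; check the schema's normative definition before relying on this interpretation.

Python example

# Non-normative sketch of "MODE_POLY2D_RELATIVE" under a literal
# reading of the mode description: the first point is absolute, all
# following points are offsets from it.
def encode_poly2d_relative(points):
    x0, y0 = points[0]
    values = [x0, y0]
    for x, y in points[1:]:
        values += [x - x0, y - y0]
    return values

print(encode_poly2d_relative([(100, 100), (110, 105), (120, 100)]))
# [100, 100, 10, 5, 20, 0]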

poly3d

A 3D polyline defined as a sequence of 3D points.

Additional properties:

true

Type:

object

Diagram
Figure 106. Diagram of the poly3d class
Table 59. Properties of the poly3d class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

closed

boolean

true

A boolean that defines whether the polyline is closed or not. In case it is closed, it is assumed that the last point of the sequence is connected with the first one.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

array

true

List of numerical values of the polyline, according to its mode.

rbbox

A 2D rotated bounding box is defined as a 5-dimensional vector [x, y, w, h, alpha], where [x, y] is the center of the bounding box and [w, h] represent the width (horizontal, x-coordinate dimension) and height (vertical, y-coordinate dimension), respectively. The angle alpha, in radians, represents the rotation of the rotated bounding box, and is defined as a right-handed rotation, that is, positive from x to y axes, and with the origin of rotation placed at the center of the bounding box (that is, [x, y]).

Additional properties:

true

Type:

object

Diagram
Figure 107. Diagram of the rbbox class
Table 60. Properties of the rbbox class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

name

string

true

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

val

array

true

The array of 5 values that define the [x, y, w, h, alpha] values of the bbox.
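
To rasterize or draw such a box, its corner points can be computed by rotating the axis-aligned corners around the center. A non-normative Python sketch:

Python example

import math

# Non-normative sketch: the four corners of a rotated bounding box
# [x, y, w, h, alpha], rotated around the center by alpha (radians,
# positive from the x-axis towards the y-axis).
def rbbox_corners(val):
    x, y, w, h, alpha = val
    c, s = math.cos(alpha), math.sin(alpha)
    offsets = ((-w / 2, -h / 2), (w / 2, -h / 2),
               (w / 2, h / 2), (-w / 2, h / 2))
    return [(x + dx * c - dy * s, y + dx * s + dy * c)
            for dx, dy in offsets]

print(rbbox_corners([0.0, 0.0, 2.0, 1.0, math.pi / 2]))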

rdf_agent

An RDF agent is either an RDF semantic object or subject.

Type:

object

Diagram
Figure 108. Diagram of the rdf agent class
Table 61. Properties of the rdf agent class
Name Type Description

type

string

The OpenLABEL type of element.

uid

string

The element UID this RDF agent refers to.

relation

A relation is a type of element which connects two or more other elements, for example, objects, actions, contexts, or events. RDF triples are used to structure the connection with one or more subjects, a predicate, and one or more semantic objects.

Additional properties:

false

Type:

object

Diagram
Figure 109. Diagram of the relation class
Table 62. Properties of the relation class
Name Type Required Reference Description

frame_intervals

array

#/definitions/frame_interval

The array of frame intervals where this relation exists or is defined.

name

string

true

Name of the relation. It is a friendly name and not used for indexing.

ontology_uid

string

This is the UID of the ontology where the type of this relation is defined.

rdf_objects

array

true

#/definitions/rdf_agent

This is the list of RDF semantic objects of this relation.

rdf_subjects

array

true

#/definitions/rdf_agent

This is the list of RDF semantic subjects of this relation.

resource_uid

#/definitions/resource_uid

This is a JSON object that contains links to external resources. Resource_uid keys are strings containing numerical UIDs or 32 bytes UUIDs. Resource_uid values are strings describing the identifier of the element in the external resource.

type

string

true

The type of a relation defines the class the predicate of the relation corresponds to.
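
As a non-normative sketch, a relation stating that an object performs an action could look as follows. The predicate type isPerformerOfAction and the UIDs are illustrative only, not taken from the standard or the ASAM OpenXOntology.

Python example

# Non-normative sketch of a relation connecting an object (RDF
# subject) to an action (RDF semantic object). The predicate type
# "isPerformerOfAction" and the UIDs are illustrative only.
relation = {
    "name": "pedestrian-performs-crossing",
    "type": "isPerformerOfAction",
    "rdf_subjects": [{"type": "object", "uid": "0"}],
    "rdf_objects": [{"type": "action", "uid": "1"}],
}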

resource_uid

This is a JSON object that contains links to external resources. Resource_uid keys are strings containing numerical UIDs or 32 bytes UUIDs. Resource_uid values are strings describing the identifier of the element in the external resource.

Type:

object

Diagram
Figure 110. Diagram of the resource uid class
resources

This is the JSON object of OpenLABEL resources. Resource keys are strings containing numerical UIDs or 32 bytes UUIDs. Resource values are strings that describe an external resource, for example, file name, URLs, that may be used to link data of the OpenLABEL annotation content with external existing content.

Additional properties:

false

Type:

object

Diagram
Figure 111. Diagram of the resources class
stream

A stream describes the source of a data sequence, usually a sensor.

Additional properties:

false

Type:

object

Diagram
Figure 112. Diagram of the stream class
Table 63. Properties of the stream class
Name Type Reference Description

description

string

Description of the stream.

stream_properties

#/definitions/stream_properties

Additional properties of the stream.

type

string

A string encoding the type of the stream.

uri

string

A string encoding the URI (for example, a URL) or file name (for example, a video file name) that the stream corresponds to.

stream_properties

Additional properties of the stream.

Additional properties:

true

Type:

object

Diagram
Figure 113. Diagram of the stream properties class
streams

This is a JSON object which contains OpenLABEL streams. Stream keys can be any string, for example, a friendly stream name.

Additional properties:

false

Type:

object

Diagram
Figure 114. Diagram of the streams class
tag

A tag is a special type of label that can be attached to any type of content, such as images, data containers, folders. In ASAM OpenLABEL the main purpose of a tag is to allow adding metadata to scenario descriptions.

Additional properties:

true

Type:

object

Diagram
Figure 115. Diagram of the tag class
Table 64. Properties of the tag class
Name Type Required Reference Description

ontology_uid

string

true

This is the UID of the ontology where the type of this tag is defined.

resource_uid

#/definitions/resource_uid

This is a JSON object that contains links to external resources. Resource_uid keys are strings containing numerical UIDs or 32 bytes UUIDs. Resource_uid values are strings describing the identifier of the element in the external resource.

tag_data

#/definitions/tag_data

Tag data can be a JSON object or a string which contains additional information about this tag.

type

string

true

The type of a tag defines the class the tag corresponds to.

tag_data

Tag data can be a JSON object or a string which contains additional information about this tag.

Diagram
Figure 116. Diagram of the tag data class
text

A text.

Additional properties:

true

Type:

object

Diagram
Figure 117. Diagram of the text class
Table 65. Properties of the text class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

name

string

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

type

string

This attribute specifies how the text shall be considered. The only possible option is as a value.

val

string

true

The characters of the text.

transform

This is a JSON object with information about this transform.

Additional properties:

true

Type:

object

Diagram
Figure 118. Diagram of the transform class
Table 66. Properties of the transform class
Name Type Required Reference Description

dst

string

true

The string UID, that is, the name, of the destination coordinate system for geometric data converted with this transform.

src

string

true

The string UID, that is, the name, of the source coordinate system of geometrical data this transform converts.

transform_src_to_dst

true

#/definitions/transform_data

JSON object containing the transform data.

transform_data

JSON object containing the transform data.

Diagram
Figure 119. Diagram of the transform data class
vec

A vector (list) of numbers or strings.

Additional properties:

true

Type:

object

Diagram
Figure 120. Diagram of the vec class
Table 67. Properties of the vec class
Name Type Required Reference Description

attributes

#/definitions/attributes

Attributes is the alias of element data that can be nested inside geometric object data. For example, a certain bounding box can have attributes related to its score, visibility, etc. These values can be nested inside the bounding box as attributes.

coordinate_system

string

Name of the coordinate system in respect of which this object data is expressed.

name

string

This is a string encoding the name of this object data. It is used as index inside the corresponding object data pointers.

type

string

This attribute specifies whether the vector shall be considered as a descriptor of individual values or as a definition of a range.

val

array

true

The numerical values of the vector (list) of numbers.

10. List of figures

Figure Description

Figure 1

The relationship between GNSS time systems and UTC

Figure 2

Relevant concepts for data annotation

Figure 3

Multi-sensor data labeling concept

Figure 4

Multi-sensor data labeling example

Figure 5

Scenario tagging concept

Figure 6

Scenario tagging example

Figure 7

ASAM OpenLABEL high-level annotation structure

Figure 8

Diagram of the metadata class

Figure 9

Coordinate systems with heading, pitch, and roll

Figure 10

Vehicle coordinate system, ISO 8855

Figure 11

Example of a transform of a multi-sensor setup into a geospatial coordinate system

Figure 12

Example of a transform of a camera and GPS sensor setup into a geospatial coordinate system

Figure 13

Example of a transform of a camera setup into an odom coordinate system

Figure 14

ASAM OpenLABEL labeling structure

Figure 15

ASAM OpenLABEL frame structure

Figure 16

ASAM OpenLABEL attributes

Figure 17

ASAM OpenLABEL geometric attributes

Figure 18

Diagram of the frame class

Figure 19

Diagram of the frame interval class

Figure 20

Diagram of the element data pointers class

Figure 21

Diagram of the frame class

Figure 22

One stream

Figure 23

One stream (not coincident stream index and frame index)

Figure 24

One stream (with timestamps and other properties)

Figure 25

Several streams (same frequency and same start and indexes)

Figure 26

Several streams (same frequency and different start and indexes)

Figure 27

Several streams containing jitter

Figure 28

Several streams (same frequency and constant shift)

Figure 29

Several streams (different frequency)

Figure 30

Several streams (different frequency)

Figure 31

Diagram of the streams class

Figure 32

Diagram of the stream class

Figure 33

Diagram of the coordinate systems class

Figure 34

Diagram of the coordinate system class

Figure 35

Diagram of the transform class

Figure 36

Diagram of the transform data class

Figure 37

Diagram of the ontologies class

Figure 38

2D bounding box definition

Figure 39

Diagram of the bbox class

Figure 40

2D rotated bounding box definition

Figure 41

Diagram of the rbbox class

Figure 42

3D bounding box definition

Figure 43

Diagram of the cuboid class

Figure 44

Diagram of the poly3d class

Figure 45

Diagram of the mesh class

Figure 46

Diagram of the mat class

Figure 47

Diagram of the binary class

Figure 48

Diagram of the point2d class

Figure 49

Diagram of the point3d class

Figure 50

Diagram of the resources class

Figure 51

Example image

Figure 52

Example image with resulting bounding boxes

Figure 53

Example visualization of a cuboid in a point cloud view

Figure 54

3D point cloud bildstein_station1 [18]

Figure 55

3D point cloud segmentation bildstein_station1 [18]

Figure 56

Example of a PNG-colored image [19]

Figure 57

Example of an image with contrast enhanced [19]

Figure 58

Example of an original image used for semantic segmentation

Figure 59

Example of a semantic segmentation that is non instance-aware

Figure 60

Example of a semantic segmentation that is instance-aware

Figure 61

Example of a full scene segmentation that is non instance-aware

Figure 62

Example of a full scene segmentation that is instance-aware

Figure 63

Scenario tagging ontology

Figure 64

ASAM OpenLABEL tagging structure

Figure 65

Diagram of the openlabel class

Figure 66

ASAM OpenLABEL attributes

Figure 67

Diagram of the tag class

Figure 68

Diagram of the tag data class

Figure 69

Diagram of the ontologies class

Figure 70

Diagram of the boolean class

Figure 71

Diagram of the num class

Figure 72

Diagram of the text class

Figure 73

Diagram of the vec class

Figure 74

Crossroad scenario

Figure 75

Diagram of the action class

Figure 76

Diagram of the action data class

Figure 77

Diagram of the area reference class

Figure 78

Diagram of the attributes class

Figure 79

Diagram of the bbox class

Figure 80

Diagram of the binary class

Figure 81

Diagram of the boolean class

Figure 82

Diagram of the context class

Figure 83

Diagram of the context data class

Figure 84

Diagram of the coordinate system class

Figure 85

Diagram of the coordinate systems class

Figure 86

Diagram of the cuboid class

Figure 87

Diagram of the element data pointer class

Figure 88

Diagram of the element data pointers class

Figure 89

Diagram of the event class

Figure 90

Diagram of the event data class

Figure 91

Diagram of the frame class

Figure 92

Diagram of the frame interval class

Figure 93

Diagram of the image class

Figure 94

Diagram of the line reference class

Figure 95

Diagram of the mat class

Figure 96

Diagram of the mesh class

Figure 97

Diagram of the metadata class

Figure 98

Diagram of the num class

Figure 99

Diagram of the object class

Figure 100

Diagram of the object data class

Figure 101

Diagram of the ontologies class

Figure 102

Diagram of the openlabel class

Figure 103

Diagram of the point2d class

Figure 104

Diagram of the point3d class

Figure 105

Diagram of the poly2d class

Figure 106

Diagram of the poly3d class

Figure 107

Diagram of the rbbox class

Figure 108

Diagram of the rdf agent class

Figure 109

Diagram of the relation class

Figure 110

Diagram of the resource uid class

Figure 111

Diagram of the resources class

Figure 112

Diagram of the stream class

Figure 113

Diagram of the stream properties class

Figure 114

Diagram of the streams class

Figure 115

Diagram of the tag class

Figure 116

Diagram of the tag data class

Figure 117

Diagram of the text class

Figure 118

Diagram of the transform class

Figure 119

Diagram of the transform data class

Figure 120

Diagram of the vec class

11. List of tables

Table Description

Table 1

Units

Table 2

Date and time formats

Table 3

Rules for using modal verbs

Table 4

Typographical conventions

Table 5

Properties of the metadata class

Table 6

Properties of the frame class

Table 7

Properties of the frame interval class

Table 8

Properties of the frame class

Table 9

Properties of the stream class

Table 10

Properties of the coordinate system class

Table 11

Properties of the transform class

Table 12

Attributes of the 2D bounding box

Table 13

Properties of the bbox class

Table 14

Attributes of the 2D rotated bounding box

Table 15

Properties of the rbbox class

Table 16

Attributes of the 3D bounding box (cuboid) using quaternion

Table 17

Attributes of the 3D bounding box (cuboid) using Euler angles

Table 18

Properties of the cuboid class

Table 19

Properties of the poly3d class

Table 20

Properties of the mesh class

Table 21

Properties of the mat class

Table 22

Properties of the binary class

Table 23

Properties of the point2d class

Table 24

Properties of the point3d class

Table 25

Properties of the openlabel class

Table 26

Properties of the tag class

Table 27

Properties of the boolean class

Table 28

Properties of the num class

Table 29

Properties of the text class

Table 30

Properties of the vec class

Table 31

Properties of the action class

Table 32

Properties of the action data class

Table 33

Properties of the area reference class

Table 34

Properties of the attributes class

Table 35

Properties of the bbox class

Table 36

Properties of the binary class

Table 37

Properties of the boolean class

Table 38

Properties of the context class

Table 39

Properties of the context data class

Table 40

Properties of the coordinate system class

Table 41

Properties of the cuboid class

Table 42

Properties of the element data pointer class

Table 43

Properties of the event class

Table 44

Properties of the event data class

Table 45

Properties of the frame class

Table 46

Properties of the frame interval class

Table 47

Properties of the image class

Table 48

Properties of the line reference class

Table 49

Properties of the mat class

Table 50

Properties of the mesh class

Table 51

Properties of the metadata class

Table 52

Properties of the num class

Table 53

Properties of the object class

Table 54

Properties of the object data class

Table 55

Properties of the openlabel class

Table 56

Properties of the point2d class

Table 57

Properties of the point3d class

Table 58

Properties of the poly2d class

Table 59

Properties of the poly3d class

Table 60

Properties of the rbbox class

Table 61

Properties of the rdf agent class

Table 62

Properties of the relation class

Table 63

Properties of the stream class

Table 64

Properties of the tag class

Table 65

Properties of the text class

Table 66

Properties of the transform class

Table 67

Properties of the vec class

Bibliography

[1] Time References in GNSS. European Space Agency – ESAC, 2011.

[2] GPS specification. National Coordination Office for Space-Based Positioning, Navigation, and Timing, 2021.

[3] GLONASS Time and Ephemeris. The Pennsylvania State University, 2020.

[4] Timescales. Paul Schlyter, 2017.

[5] ISO 8601 Date and time format. International Organization for Standardization, 2019.

[6] ASAM OpenDRIVE 1.7.0. ASAM e. V., 2021.

[7] ASAM OpenSCENARIO 1.1.0. ASAM e. V., 2021.

[8] ASAM OpenSCENARIO 2.0.0. ASAM e. V., 2021.

[9] ASAM OpenXOntology 1.0.0. ASAM e. V., 2021.

[10] BSI PAS 1883:2020 Operational design domain (ODD) taxonomy for an automated driving system (ADS). Specification. The British Standards Institution, 2020.

[11] ISO 8855:2011 Road vehicles — Vehicle dynamics and road-holding ability — Vocabulary. International Organization for Standardization, 2011.

[12] SAE J3016 Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International, 2021.

[13] JSON Schema Draft-07. json-schema.org, 2020.

[14] Robot Operating System (ROS) documentation. Open Source Robotics Foundation, Inc., 2020.

[15] UTM Technical Specifications Document. Cooperative Surveillance of low flying drones, 2019.

[16] RFC 4122. The Internet Society, 2005.

[17] SciPy documentation scipy.spatial.transform.Rotation. scipy.org, 2021.

[19] Mapillary Vistas Dataset. Mapillary, 2021.

[20] W3C RDF 1.1 Turtle Terse RDF Triple Language. World Wide Web Consortium, 2014.

[21] W3C SPARQL 1.1 Query Language. World Wide Web Consortium, 2013.