A

Adaptive Scenario Execution
A Foretify feature that automates the real-time adjustment of scenario parameters during simulation to ensure that the intended objectives are achieved, even when minor deviations or uncertainties occur. This approach minimizes invalid test runs, reduces manual intervention, and preserves scenario intent, improving the efficiency and reliability of large-scale autonomous vehicle validation.

Association for Standardization of Automation and Measuring Systems (ASAM)     
A key standardization body that develops standards for tools used in the development and V&V of Automated Driving Systems.

Automated Driving System (ADS) 
The hardware and software in a vehicle that together can perform all real-time driving tasks such as steering, acceleration, braking, and environment monitoring without human intervention, within defined operational limits. ADS typically refers to SAE Levels 3, 4, and 5, where the system takes full responsibility for dynamic vehicle control and response while engaged.

Autonomous Vehicle (AV)
A self-driving vehicle (SAE Level 4 or 5) capable of navigating and operating without human control by using a combination of sensors (such as cameras and radar) and artificial intelligence.

C

Checkers
OSC DSL entities which are special types of watchers designed to flag incorrect behaviors of the ADS or the scenario and produce issues (special types of intervals).

D

Device Under Test (DUT)    
The specific component being evaluated or assessed in a testing environment. A deprecated term, replaced by System Under Test (SUT).

Domain Specific Language (DSL)
A specialized programming language tailored to a specific application domain, offering precise syntax and features that make it easier to express domain-specific requirements and solutions compared to general-purpose languages. OpenSCENARIO DSL is used to define and simulate complex traffic situations and scenarios for autonomous vehicle testing and development.

E

Ego
The vehicle which is running the autonomous software stack and is typically the scope of the System Under Test.

Evaluation Scenarios          
OSC DSL scenarios which define the movements, multi-phase behaviors, and/or trajectories of one or more actors, along with environmental factors that an ADS may encounter. Evaluation Scenarios are used to: 1) Monitor, detect, and match scenarios, 2) Produce interval data for each matched scenario occurrence.

Evaluators    
OSC DSL entities which independently detect, monitor, and produce interval data for the situations that an ADS encounters in real-world driving logs or simulation tests to evaluate the performance and ODD test coverage of the ADS.

Expected Operational Conditions (EOC)  
The real-world environmental, geographical, and roadway factors that an autonomous driving system is reasonably likely to encounter during operation. EOC represents the actual conditions anticipated for the system, in contrast to the Operational Design Domain (ODD), which specifies the designed limits within which the system can safely operate.

G

Generation Scenarios         
OSC DSL scenarios (executable simulation scenarios) which define how to activate the movements, multi-phase behaviors, and/or trajectories of one or more actors, along with environmental factors, to challenge and test an ADS in simulation. There are two types of generation scenarios: abstract and smart replay.

I

Intervals
Data structures for storing, tracking, and visualizing interesting behaviors over slices of time. Each interval has a start time, an end time, and data attributes. Generation Scenarios, Evaluation Scenarios, and Evaluators produce intervals and can attach two types of additional data to them: 1) cover(): data used to measure testing completeness or training completeness based on the goals for the safe operation of the ADS in a specific Operational Design Domain (ODD), 2) record(): data used for other analytics, mainly KPIs.
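As an illustration only (not the actual Foretify or OSC DSL implementation), an interval with cover()- and record()-style data could be modeled along these lines; all names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Interval:
    """Hypothetical sketch of an interval: a slice of time with attached data."""
    start_time: float  # seconds into the run
    end_time: float
    cover_data: dict = field(default_factory=dict)   # coverage-oriented data
    record_data: dict = field(default_factory=dict)  # KPI/analytics data

    def cover(self, **items):
        """Attach data used to measure testing or training completeness."""
        self.cover_data.update(items)

    def record(self, **items):
        """Attach data used for other analytics, mainly KPIs."""
        self.record_data.update(items)

    @property
    def duration(self) -> float:
        return self.end_time - self.start_time

# A scenario match from t=12.0s to t=17.5s
iv = Interval(start_time=12.0, end_time=17.5)
iv.cover(ego_speed_bucket="30-50kph", road_type="highway")
iv.record(min_gap_m=4.2)
```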

K

Key Performance Indicator (KPI)
Metrics (such as recall, precision, error rates, response times, coverage, etc.) that are monitored to gauge performance, quality, and effectiveness, and to set success criteria for projects and product features.
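For example, precision and recall, two of the KPIs named above, are computed from detection counts; a minimal sketch:

```python
def precision_recall(true_pos: int, false_pos: int, false_neg: int):
    """Compute precision and recall from detection counts.

    precision = TP / (TP + FP): how many reported detections were correct.
    recall    = TP / (TP + FN): how many real events were detected.
    """
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    return precision, recall

# 90 correct detections, 10 false alarms, 30 missed events
p, r = precision_recall(true_pos=90, false_pos=10, false_neg=30)
print(p, r)  # 0.9 0.75
```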

O

OpenStreetMap (OSM)
A collaborative, open-source project that creates and provides free, editable maps of the world. It is built by a community of mappers who contribute and maintain geographic data, such as roads, trails, buildings, and more, from sources like GPS devices, aerial imagery, and manual surveys.

Operational Design Domain (ODD)
Operating conditions under which a given driving automation system or feature thereof is specifically designed to function, including, but not limited to, environmental, geographical, and time-of-day restrictions, and/or the requisite presence or absence of certain traffic or roadway characteristics.

R

Root Cause Analysis (RCA)
Systematic process used to identify the primary underlying cause of defects or issues within the software. By pinpointing the root cause, it enables developers to implement effective solutions to prevent the recurrence of similar problems.

S

Safety Performance Indicators (SPI)         
A measurable value used to assess the safety-related performance of an automated driving system. SPIs are metrics that indicate the likelihood of crash involvement or the presence of unsafe conditions, and are designed to serve as proxy or surrogate measures for actual crash risk.
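Time-to-collision (TTC) is one commonly used surrogate measure of crash risk. A sketch of how such an SPI might be evaluated; the function and the 2-second threshold are illustrative assumptions, not Foretellix definitions:

```python
def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    """TTC to a lead vehicle: gap divided by closing speed.

    Returns infinity when the ego is not closing on the lead vehicle.
    """
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return float("inf")
    return gap_m / closing_speed

# Ego at 20 m/s, lead at 15 m/s, 25 m apart: closing at 5 m/s
ttc = time_to_collision(gap_m=25.0, ego_speed_mps=20.0, lead_speed_mps=15.0)
unsafe = ttc < 2.0  # illustrative threshold; real SPI thresholds are project-specific
print(ttc, unsafe)  # 5.0 False
```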

Scenario-based Development Methodology (SDM)        
A methodology for streamlining the development of abstract scenarios using OpenSCENARIO DSL for test generation and evaluation. SDM is based on the best practices for developing modular, consistent, and reusable scenarios.

Simulation of Urban MObility (SUMO)      
Open-source, microscopic, multi-modal traffic simulation tool designed to simulate the movement of vehicles, pedestrians, and other traffic participants within complex road networks.

Software in the Loop (SIL)
Testing methodology in which software components of a system are executed in a simulated environment to verify their functionality, performance, and interactions before deployment on actual hardware. This approach allows for early identification of issues and evaluation of software behavior under diverse, repeatable scenarios, supporting safe and efficient development of complex systems.

System Under Test (SUT)
The system being tested and evaluated. The SUT can be just the planning-and-control part of the vehicle, or the whole vehicle, or multiple vehicles plus the remote-supervision and map-update facilities, and so on.

V

Validation of Scenarios
Quantitatively determines the degree to which the simulation accurately represents the real world for its intended use.

Vehicle
Used to describe the non-ego vehicle actors in a scenario. Some companies use the term “NPC”, but in Foretellix, we use the term “vehicle” for this purpose, e.g., the cut_in_vehicle is the vehicle cutting in front of the ego in a cut_in scenario. Vehicle categories include sedan, van, bus, box_truck, semi_trailer_truck, full_trailer_truck, motorcycle, and bicycle.

Verification of Scenarios
Verification ensures the simulation model complies with its requirements and specifications. It demonstrates the mathematical/physical correctness of models and the quality of the framework.

Vulnerable Road User (VRU)
Refers to a pedestrian or cyclist actor. Note: In the industry this often refers to any road user not protected by an enclosed vehicle cabin, meaning it often includes motorcyclists.

W

Watchers
OSC DSL entities which monitor simpler behaviors, actions, or events, along with environmental factors that an ADS may encounter. Watchers produce interval data and can be used in procedural code to influence simulation execution.
