More than 350 years ago, Sir Isaac Newton formulated the theory of gravity and the laws of motion. These include the relationships between variables such as speed, time, distance, and acceleration that natural motion universally adheres to. Many years later, ASAM (OpenSCENARIO® 2.0.0), VVM (Pegasus family), and ISO 34501 introduced a scale of distinct scenario description levels to differentiate between the available scenario creation styles and technology generations. The scale is called “scenario abstraction levels.” In our ongoing discussions with users, we realize there is still some confusion about the levels and what is required to make them useful. In this blog, we define the abstraction levels, explain how to attain the productivity and robustness promised by both logical and abstract scenarios, and even acknowledge Isaac Newton’s contribution to impactful ADAS and ADS V&V projects.

What are the four levels of abstraction?

The abstraction levels define a spectrum of scenario descriptions and their related capabilities. Each level adds more scenario control and automation than the previous one. While the first three levels of abstraction are formal and intended to be compiled by tools, the functional description is geared toward people reading and understanding and thus may include free-form English, supporting images, and more. The definition of the four abstraction levels is straightforward.

  • Concrete scenarios allow only attribute value assignments.
  • Logical scenarios allow assignments, or attribute value selection according to a distribution, from fully specified ranges.
  • Abstract scenarios allow any possible constraints, including ranges and distributions, cross-attribute constraints, high-level location specification, and timing constraints.
  • Functional scenarios are geared toward humans; they are not a formal description and thus are not machine-readable.

If you know the level of abstraction basics, feel free to jump to the next section. If not, here are more details.

The four levels of abstraction are illustrated in the figure below:

Figure 1: ASAM and Pegasus levels of abstraction

With concrete scenarios, all scenario attributes must be carefully calculated and manually assigned by the user. For example, set car1’s speed to 10 kph and start the lane-change maneuver at a specific XYZ location. (See Figure 2 for a visualization of the cut-in scenario.) All attributes, including maps, vehicle speeds, distances, accelerations, and maneuver locations, must be specified in a concrete scenario. The laws of physics impose multiple dependencies between the scenario’s attributes; for example, the distance traveled is the product of speed and time, and the average acceleration equals the speed difference divided by the time. Users must also select a proper location according to the scenario’s nature and attributes. For example, at least two lanes are needed for the cut-in scenario, and various speed settings require road segments of various lengths. Failing to provide a consistent set of attributes or locations will likely result in a meaningless scenario and wasted human and machine resources.
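To make these dependencies concrete, here is a minimal sketch, in Python rather than OSC2, of the kinematic consistency check a concrete-scenario author must perform by hand. All names and tolerances are illustrative assumptions, not part of any tool:

```python
# Hypothetical consistency check for a concrete scenario's attributes.
# It encodes the two kinematic relations mentioned above:
#   distance = speed * time, and average acceleration = delta_v / time.

def is_consistent(speed_mps, duration_s, distance_m,
                  start_speed_mps, end_speed_mps, accel_mps2,
                  tol=1e-6):
    """Return True if the attribute set obeys the implicit physics."""
    distance_ok = abs(speed_mps * duration_s - distance_m) <= tol
    accel_ok = abs((end_speed_mps - start_speed_mps) / duration_s
                   - accel_mps2) <= tol
    return distance_ok and accel_ok

# A consistent set: 10 m/s for 5 s covers 50 m; no speed change.
assert is_consistent(10.0, 5.0, 50.0, 10.0, 10.0, 0.0)
# An inconsistent set: the stated distance does not match speed * time.
assert not is_consistent(10.0, 5.0, 80.0, 10.0, 10.0, 0.0)
```

A scenario failing a check like this is exactly the "meaningless scenario" the text warns about.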

Figure 2: the cut-in scenario – start behind on a side lane and finish ahead in the same lane

The next level of abstraction is the logical scenario. As with the previous abstraction level, the user must specify a set of locations and attributes, but there is one significant addition – individual attributes can be left unassigned within a fully specified range. For example, instead of hardcoding the speed to 45 kph, a speed range between 10 and 90 kph can be provided. The value selection can originate from aggregated real-life data distributions or be biased to force challenging conditions. The assumption is that the tedious calculation involved with every concrete scenario will be eliminated, and scale will be reached. (More about this assumption below.)
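As an illustration only (not an OSC2 feature), the range resolution described above can be sketched in Python; the function names and the bias heuristic are hypothetical:

```python
import random

# Sketch of resolving a logical scenario's speed range: either a plain
# uniform draw, or a draw biased toward challenging high speeds.

def pick_speed_uniform(lo=10.0, hi=90.0, rng=random):
    """Uniform selection within the declared range (Monte Carlo style)."""
    return rng.uniform(lo, hi)

def pick_speed_biased(lo=10.0, hi=90.0, rng=random):
    """Bias toward the top of the range by taking the max of two draws -
    one simple way to over-sample challenging conditions."""
    return max(rng.uniform(lo, hi), rng.uniform(lo, hi))

rng = random.Random(0)
samples = [pick_speed_uniform(rng=rng) for _ in range(1000)]
assert all(10.0 <= s <= 90.0 for s in samples)
```

In practice the distribution would come from aggregated real-life data rather than a toy heuristic like the one above.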

The automotive validation journey is a challenging process. Equipped with the requirements, test plan, and gut feeling, teams need to span the scenario space and look between the cracks for expected and unexpected bugs. This is the main motivation for abstract scenarios.

Abstract scenarios allow total freedom for the user to express any desired dependency using any form of constraint, including ranges. Constraints can be:

  • Cross-attribute constraints – e.g., the NPC’s speed should be slower than the location’s legal speed.
  • Abstract location constraints – e.g., create a cut-in near a junction. Note that the scenario does not force a specific junction but requires a junction nearby.
  • Timing and execution constraints – e.g., the lane change should take place with a half-second time headway. There are multiple ways to implement such a request, but the user just wants to capture the timing requirement in a constraint.
  • Control flow and composition constraints – e.g., the scenario should combine the cut-in and oncoming scenarios.

Abstract scenarios enable a goal-based approach in which the user can ask for any desired scenario (set the goal for the tool), trusting the tool to deliver multiple consistent results.

Functional scenarios are non-formal descriptions of the desired scenarios, including the goal, motivation, and intent. For example, a non-formal description in English may stipulate that to validate the Ego in cut-in scenarios, we should both force a maneuver from the Ego to avoid danger and ensure that the Ego does not get confused by the cut-in maneuvers. It may include images (such as Figure 2 above) to illustrate the scenario to the test writer.

Why are users disappointed by logical scenarios? And what is required to fix that?

Logical scenarios are a significant improvement over concrete scenarios, as they capture a desired scenario parameter space, but they are misleading to both users and vendors. Letting the tool select a speed between 10 and 90 kph requires the tool to also consistently adjust starting locations, other vehicles’ speeds, accelerations, maneuver timing, and more. This is where Isaac Newton’s discoveries cannot be ignored – tools must be smart enough to propagate the speed selection to other attributes.

This misunderstanding is not a minor one. Without a tool capable of proper inference and value propagation, logical scenarios may result in a massive number of meaningless scenarios, and a project team’s hope of reaching the necessary large-scale test suite and replacing tedious manual effort falls apart.

Teams that plan to adopt an optimization workflow (applying mathematical algorithms to gravitate toward risk) also get their share of disappointment. No automation has been available to take the optimizer’s value recommendations and create a consistent scenario around them. While some attributes, like time-of-day or road friction, might be independent of others, most scenario attributes are tightly connected, often in surprising ways.

The smart technology that makes the needed inferences is called a constraint solver; the industry now realizes that constraint solvers are critical for getting consistent logical scenarios and for enabling optimization workflows. Even if users do not specify constraints in their scenarios, the system needs to consider implicit constraints (like the laws of physics), which the constraint solver solves. By the way, this understanding has already proven mandatory in other industries, something that many of us at Foretellix have experienced over the last ~20 years.
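To illustrate the idea (real constraint solvers are far more sophisticated), here is a toy Python sketch that uses rejection sampling to propagate a speed choice to time, distance, and road length. All parameter names and ranges are assumptions for the example:

```python
import random

# Toy "constraint solver" via rejection sampling: draw candidate values,
# derive the dependent attributes from the implicit physics (d = v * t),
# and keep only combinations that fit the chosen road segment.

def solve_cut_in(segment_length_m, rng, max_tries=10_000):
    for _ in range(max_tries):
        speed_kph = rng.uniform(10, 90)
        duration_s = rng.uniform(2, 10)
        distance_m = speed_kph / 3.6 * duration_s  # implicit constraint
        if distance_m <= segment_length_m:          # fits on the road
            return {"speed_kph": speed_kph,
                    "duration_s": duration_s,
                    "distance_m": distance_m}
    return None  # over-constrained: no consistent scenario found

rng = random.Random(1)
scn = solve_cut_in(segment_length_m=100.0, rng=rng)
assert scn is not None
assert scn["distance_m"] <= 100.0
```

Production solvers use constraint propagation rather than blind retries, but the contract is the same: every returned scenario is consistent with both the explicit and the implicit constraints.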

What is the added value of abstract scenarios?

ASAM and VVM have abstract scenario levels for all the right reasons – abstract scenarios are game changers for scenario-based testing productivity and robustness.

An efficient testing workflow is not about throwing an endless number of scenarios at the system, hoping they will expose bugs; it is a large-scale guided hunt that follows the test plan and intelligently explores the implementation’s cracks. You need complete control over the generated scenarios (as well as other capabilities) to follow the test plan, and you need to combine randomness and the unknown. In your journey to span the scenario space and find these bugs, you will constantly refine the scenarios and further direct them toward different areas of concern – abstract scenario controls are a required tool for the job, but it will still be hard work!

One of the most significant value propositions of abstract scenarios is the ability to aggregate generic V&V expertise within executable packages and to enable its use across multiple ODDs. Assume that you have defined a valuable concrete or logical cut-in scenario that challenges the sensitivity of an autonomous driving function. Such a scenario cannot be migrated to various project ODDs without manual adjustments. If it was designed for a sedan Ego, it might not be applicable to a truck with totally different dynamics. You need to manually adjust it to another map or move it from an urban to a highway setting. It must be modified for left-hand street driving if your ODD includes London. For development productivity, you need to combine scenarios modularly and have a technology that can resolve the new dependencies of the combined scenarios. For example, you need to be able to execute the cut-in and oncoming scenarios at the same time and have a tool that can find locations appropriate for both. Modeling for adaptability, extensibility, and modularity are among the basic mandatory capabilities that SW projects cannot do without. These capabilities are finally available for scenario-based testing in the form of abstract scenarios.

At the risk of being repetitive, abstract scenarios also depend on the availability of a constraint solver. The need here is evident because the ODD and scenario constraints are explicit and user-defined, unlike the implicit laws-of-physics and vehicle-dynamics constraints.

So which abstraction layer should I use while coding my scenarios?

All abstraction levels might be appropriate, depending on the requirement and use case. Nevertheless, here are a few essential guidelines.

  • The OSC2 scenario abstraction should be equivalent to or above the requirement specification abstraction. For example, if the requirement asks for a cut-in and lists all scenario attributes, a concrete scenario might be the solution. However, a better choice might be to create an abstract generic cut-in scenario and assign concrete values on top of it. This will be the most effective and productive way to develop and maintain your scenario catalog.
  • Be strategic and see beyond a specific project or ODD’s needs. With the OSC2 language and solver technology, you can create or adopt generic V&V packages. For example, it is highly desirable not to hardcode scenarios to a specific map or location. This means that pure logical scenarios should be discouraged.
  • Scenario abstraction is typically modified as the project progresses and new realizations are formed. You may initially assume that a logical scenario with specific ranges is needed (ignoring the guideline above) and later discover additional ODD needs (e.g., vehicle dynamics constraints or location constraints). Enhancing the scenario with such considerations will turn the logical scenario into an abstract one. This means that your toolchain should support all abstraction levels to maximize control and development efficiency.

Most of the industry acknowledges the need for a massive amount of intelligent, meaningful scenarios. The way to replace the tedious manual work is by defining scenarios in high-level terms and leveraging the constraint solver’s automation to produce known and unknown edge cases. Logical scenarios combined with a constraint solver provide a first level of abstraction. One source of users’ frustration is that tool vendors claim to support logical scenarios but do not provide the necessary solver technology. That being said, only abstract scenarios allow the refined control and reuse required for real-life project productivity and safety. If you wish to know more about abstraction or explore Foretellix’s unique scenario-creation technology, give us a shout at info@foretellix.com.

As usual, drive safe,

Sharon

On July 20th, 2022, the Association for Standardization of Automation and Measuring Systems (ASAM) released its latest standard language and domain model for scenario-based testing – ASAM OpenSCENARIO® 2.0.0 (OSC2.0). OSC2.0 offers abstraction and power features to advance industry best practices. Following users’ requests, ASAM also published its updated roadmap on the converged OSC (see Figure 1 below and this document). This information is vital for teams that are in the process of setting up their V&V toolchain and methodology for the years to come. If you are in the process of planning your short- or long-term AV or ADAS development project, you might find this blog helpful, and we encourage you to keep reading.

Figure 1: ASAM updated roadmap for OSC

Short- and long-term planning of V&V projects

Have you ever tried to estimate the number of tests that should be accomplished with your Software-In-the-Loop (SIL) infrastructure to make autonomous driving systems safe*? The answer primarily depends on your ODD, system engineering requirements, the automated function SAE level, and several other factors. But how about a million tests per day?

If you think it is way too high, consider the following:

  • ASIC companies (e.g., Intel, Apple, Qualcomm) run significantly more simulations a day to validate our smartphone devices. To put things in perspective, our smartphones have about one-twentieth of the HW, a significantly smaller scenario space, and a lower cost of failure. Why should we assume lower test numbers in our industry?
  • Many leading automotive companies have already announced that they execute more than a million tests per day. Regardless of the number, you must execute many high-quality scenarios to be competitive in this industry.

Achieving a massive number of quality scenarios is a multi-year marathon. It requires strategized development, and proper planning is a prerequisite to the correct workflow and toolchain. Even if you initially need a smaller number of scenarios, this number multiplies, and development takes time. Developing or adopting scenarios that will remain reusable, applicable to multiple project ODDs, and valid in the future is vital. Let’s start with the implications of scale on your choice of language and why a standard language is the right decision for you.

* Foretellix has developed a calculator that allows users to get a data-driven answer for this crucial question, and we can advise you on how to set up your CI/CD flow accordingly.

Why Adopt a Standard Language?

A standard language is your safest path because multiple tool vendors will support your scenario investment. In the history of languages, a by-committee or de-facto standard language always emerges, and an entire ecosystem appears around it. In our automotive industry, ASAM is the organization that carries the standardization torch for the benefit of all stakeholders. OSC1.x and now OSC2.0 are the highlights of ASAM’s essential deliverables.

A standard solution is good for users. Unlike proprietary languages that lock you into a specific solution, a standard language provides the freedom to select the best-in-class. Standards also fuel collaboration at the team and company level and allow sharing scenarios, requirements, and expertise with existing or future partners.

While some tool vendors may make their product sticky using proprietary languages, standards also deliver lots of value for vendors. Knowing your customers’ language facilitates building solutions that will serve significant industry segments and enable collaboration with other vendors. With that in mind, Foretellix opened its language – Measurable Scenario Description Language (M-SDL) – and eventually offered it to ASAM as a basis for OSC2.0.

The good news is that many companies – both tool providers and AV/ADAS developers – are already adopting a standard or searching for their way out of proprietary languages.

Figure 2: Standard Language vs. Proprietary Language

Why Choose the OSC2.0 Standard Language?

As industries and our thinking evolve, programming languages also need to evolve. The autonomous driving industry is one of the fastest-changing industries, and OSC2’s capabilities and syntax were designed to enable this rapid change.

Here is a quick overview of the OSC2 technical advantages over OSC1.x:

  • A modern human-readable Domain Specific Language (DSL) that is concise and intuitive for those who are domain experts but not necessarily programmers or V&V experts.
  • Extra expressiveness with constraints that allow defining abstract scenarios and enable the automation and scale needed for complex V&V tasks.
  • Features for setting coverage goals and measuring the testing progress. Managing millions of scenarios is a challenging task. Coverage helps you intelligently assign compute and human resources and prove to yourself and others (in a data-driven safety case) that the SUT is safe.
  • Ability to define success criteria and KPIs and capture requirements definition in an executable format. It minimizes the need for manual verdict analysis and debugging and is critical for reaching thoroughness and scale.
  • Advanced reuse capabilities such as object-oriented and aspect-oriented features to define modular, composable, reusable, and extendable V&V packages. It is critical for reducing development and maintenance efforts and enabling collaboration among a distributed development team.
  • Ability to define scenarios that are adjustable to any location or map.
  • Extendible domain model to make the solution applicable to any original feature your company develops.
  • Increased support for binding to external code/functions/methods (e.g., distribution functions, statistics).

ASAM’s recent roadmap reinforces the previously published plan of making OSC2.x a superset of OSC1.x and converging on a single standard version based on the OSC2.0 DSL. Additionally, as part of the ASAM implementers forum, multiple vendors have already announced various levels of OSC2 support. This means that sooner or later, OSC2.x will be the only viable standard option and the best one moving forward.

Personally, having collaborated with many bright and passionate individuals as part of the ASAM OSC2 project, I can testify that we tried to maintain consistency with OSC1. We also created an entire chapter about migrating from OSC1 to OSC2. Nevertheless, there was no way to deliver the required capabilities on top of OSC1 in a backward-compatible fashion, and while both languages are called OpenSCENARIO and offer overlapping capabilities, they are indeed two different languages.

If you wish to learn more and further understand each item above, you can find more information here.

What methodology should I use with OSC2?

If you give a V&V person a whiteboard and ask them to draw their V&V workflow, the result will be something along the lines of Figure 3.

Figure 3: The traditional V&V high-level tasks that OSC2 automates

The same figure will likely be drawn whether the V&V person struggles with dozens of scenarios or successfully orchestrates millions of them. Selecting a methodology that scales appropriately requires a more refined analysis of both the automation enablers and the methodology parts. Coupled with the right enabling technologies, OSC2 can boost the scalability of every step of the workflow. At ASAM, we used the guiding principle of not forcing one specific methodology on OSC2. As a result, you can adopt OSC2 and enjoy the readability features but still use it with low-abstraction scenarios that will inherently limit your V&V scalability. On the flip side, OSC2 enables a powerful workflow that makes it possible to scale up your project as needed. Foretellix’s OSC2-based methodology answers questions such as:

  • How to build generic scenarios that are modular and reusable? When is the right time to move from concrete to abstract?
  • How to create ODD agnostic V&V packages that can be used in all phases of your automotive project?
  • How to structure the V&V plan and correlate it to system engineering requirements and other V&V risk dimensions?
  • How to layer constraints effectively to steer scenario creation without being restricted to minor modifications?
  • How to create a tailored test suite optimized for a specific stack release or your recent stack changes?

In summary, selecting the correct methodology is a critical ingredient of any successful automotive project, and it needs to cover high-level concepts as well as practical operation guidance with careful attention to detail.

When should I move to the new standard?

Once you decide to move to OSC2, do so without delay: it prevents producing more legacy code that further complicates the migration, and it lets you enjoy the automation and robustness sooner.

As mentioned above, multiple tool vendors offer different levels of OSC2 support. Please check out our state-of-the-art solution. Foretellix has been developing native engines for the OSC2 paradigm for the past few years, successfully applying them to projects and shaping the methodology to maximize productivity and safety. We also provide a unique value proposition: Foretellix has decided not to develop test execution platforms (i.e., we do not provide HIL or simulator products). We add an OSC2 support layer to any commercial or homegrown execution platform so that our users can leverage any simulator for its strengths and maintain the freedom to choose their test execution platforms in the future.

What’s the safest and most efficient way to move to the OSC2 standard?

The answer depends on the source scenarios’ starting condition and the migration motivation. This topic is too wide to be covered in what was initially planned to be a short blog, but here are a few high-level steps:

  • Step 1: Estimate the value of existing scenarios. This step’s most challenging part is to assess your homegrown scenarios objectively. Time invested does not necessarily equal a high-value scenario, nor is it about talent – it is mostly about tools; you may be a talented runner and still lose the race competing against a couch potato driving a car. Conversely, a concrete scenario may capture a rare edge case that you wish to maintain in your test-suite execution moving forward. There are ways to apply an OSC2 coverage model to non-OSC2 scenarios and objectively assess their aggregated contribution.
  • Step 2: Decide which scenarios to convert. If the motivation for moving to OSC2 was automation and robustness rather than simulator replacement (most of our users are in that category), you may decide to initially keep executing the legacy scenarios and, over time, migrate these to the standard input format.
  • Step 3: Convert scenarios. There are two options here: convert line-by-line or refactor the code.
    • With line-by-line migration, you take every statement in the source language and migrate it to OSC2. If your source language is OSC1.x, the migration process may include an automatic conversion solution that can do much of the heavy lifting. The problem with this conversion is that the resulting abstraction level mirrors the abstraction of the source, meaning that much of the OSC2 potential is not utilized. Logical and concrete scenarios contain both intent and specific test implementation. For example, a concrete location may be selected by the test writer but only characterized at a high level in the requirement. The resulting scenario will be limited to the test writer’s hard-coded implementation and cannot be composed or modified in other ways.
    • Code refactoring – At this level, you analyze the source scenarios and create reusable abstract OSC2 scenarios, thus identifying the hidden intent and stripping away implementation details. We have seen hundreds of OSC1 scenarios easily refactored into a handful of OSC2 abstract scenarios that take up far less space. In some migrations, scenario input formats also contained tool settings; this is a bad practice that should be eliminated to keep the scenarios generic.

Summary

The recent OSC2 release is a considerable step forward, providing the level of abstraction and the capabilities needed to address the industry’s growing complexity. Along with the release, ASAM clarified the standard’s future direction of making OSC2.x the converged solution. If you wish to learn more about how these can be used today or want to explore migration alternatives, please contact us at info@foretellix.com.

Drive safe,
Sharon

Our first two OSC2 automation expert seminars were extremely successful, with hundreds of attendees and a lively exchange of information both in the Q&A sessions and on chat. We would like to thank all attendees. We hope that this was informative and valuable for you and appreciate the positive feedback that we have received so far. Our goal with these webinars is to visit key topics and jointly find solutions for the benefit of the industry.

Many users requested a transcript of the webinar Q&A segment, and this blog post is dedicated to that. Note that it is more informative than our typical blogs. It ranges across conceptual, methodology, and standard-implementation questions; you can read all of it or jump to the specific answers.

Feel free to bring up other topics of interest in the comments below or send an email to info@foretellix.com. We will do our best to respond to any requests within the next couple of sessions and maybe even hold full sessions around the deeper or more interesting topics.

We have already published the date of our next webinar on the much-requested topic of requirements-based testing and how to efficiently deploy it with an automated V&V process. The webinar will take place on March 24th at 5pm CET. You can register to join and ask questions.

Questions:

Is there also the intent to be used to create or validate datasets for machine learning technologies?

Before addressing the question, it is important to clarify that OpenSCENARIO 2.0 is a standard input format, and the ability to capture scenario intent can fuel multiple tools and purposes. The standard itself neither limits nor dictates a specific purpose, flow, or tool. Regarding training, we agree with the question’s suggested direction. The steerable random flow allows users to create multiple surprising scenarios that can be used for both training and validation purposes. There is much more to be said about the training flow and the mix of surprises with real-life probabilities.

How can a random scenario be generated without defined constraints?

As the question suggests, for many practical applications you need to use constraints. Logical scenarios allow random selection (aka Monte Carlo) within ranges. These ranges define a space of scenarios to be tested, but this isn’t enough for most practical purposes. Attributes such as speed, distance, time, acceleration, and even the road are tightly correlated through implied physical constraints. For example, driving at a certain speed for a certain amount of time determines the distance traveled and requires a road of sufficient length. Constraints also provide abstraction: they allow capturing a high-level intent in a formal description. This is necessary for technology to replace the tedious cognitive work by humans that is currently the norm. For example, the tool will select a proper segment for a cut-in scenario, adjusting the scenario’s physical and intent considerations accordingly.

Are the road and intersection an “actor”?

Roads and intersections are structs composed of multiple segments. See the domain model subsection in the LRM.

Are there standards for reporting logs of each run to understand failure states and pre and post-actor states?

We agree that this is needed. In general, the more you can standardize the better, but the creation of 2.0 was a huge cooperative effort. One reason this version of OpenSCENARIO is called 2.0 is that we will eventually have more than just two versions. A lot of excellent work was done to standardize things, yet some solid ideas just couldn’t fit into our schedule for 2.0. For instance, there are basic methods of reporting errors and warnings, as well as more sophisticated controls, that were postponed to a future release. Foretellix has standards that we will be happy to contribute to subsequent versions, but we’ll see how that pans out.

“Lane change” is a good concept in environments that provide lanes. But what about environments with mere “free space traffic” (think of Indian roads with lots of single-track vehicles)?

The OSC2.0 domain model includes many actions and modifiers in addition to lanes, such as movable objects that can move in free space, and a car that can drive along paths. See the LRM for more details.

How do you know the CDV does not have a major hole? How do you know you have done enough to “verify”?

There is no single solution that ensures 100% coverage of the entire scenario space. Coverage-Driven Verification allows you to simply and effectively define and achieve goals. The use of automation enables scale and measurability and reduces room for human error. As such, it is more effective than the manual approach, which often suffers from major holes. At the same time, there are important points to make about CDV and the possibility of missing holes:

  • The use of random generation may expose unknown, overlooked conditions. The combination of this and generic KPIs identifies holes in your original plan.
  • The abstraction and ability to create reusable V&V packages allow aggregating expertise and interesting scenarios. Over time, your reusable scenarios become a thorough asset to rely on.
  • The coverage goals can be automatically or manually extracted from multiple complementary resources and execution platforms. Optimizers can take these as starting points and steer the automatic test creation to visit risky areas.

CDV enables an automated flow full of checks and balances to identify coverage holes or missing requirements.

How do you calculate the coverage metrics?

OSC2.0 coverage features facilitate organizing infinity with a gradable coverage model. As a basis, each coverage item is calculated according to its collected type expressions:

  • For Boolean expressions (for example, was the siren on) – both true and false need to be observed for 100% coverage, so observing just one of the values is 50% coverage (one out of two). No observed value means 0% coverage.
  • For enumerated types (for example, was the vehicle category truck, sedan, or van), the number of observed values divided by the number of possible values is the verification grade. For example, if an enum type has three possible values and two of these were observed in a test suite, the verification grade is 2/3, or about 67 percent.
  • For continuous values such as speed (an unsigned integer here), the user can create buckets of values and count the hits in each bucket. For example, a speed range of 0 to 120 kph can be split into four buckets: 0..30, 31..60, 61..90, 91..120. By default, one hit in a bucket is sufficient to declare the bucket covered.
  • The overall average of all the calculated item grades is the overall coverage grade.
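The grading scheme above can be sketched in a few lines of Python. This is an illustration of the described arithmetic, not the normative OSC2 computation, and the helper names are hypothetical:

```python
# Illustrative coverage grading: each item's grade is the fraction of its
# possible values (or buckets) observed; the overall grade averages items.

def bool_grade(observed):
    """Boolean item: both True and False needed for 100%."""
    return len(set(observed) & {True, False}) / 2

def enum_grade(observed, possible):
    """Enum item: observed values divided by possible values."""
    return len(set(observed) & set(possible)) / len(possible)

def bucket_grade(values, edges):
    """Continuous item: edges define half-open buckets [edges[i], edges[i+1])."""
    hit = set()
    for v in values:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                hit.add(i)
    return len(hit) / (len(edges) - 1)

siren = bool_grade([True])                                            # 0.5
category = enum_grade(["truck", "sedan"], ["truck", "sedan", "van"])  # 2/3
speed = bucket_grade([15, 45, 100], [0, 31, 61, 91, 121])             # 0.75
overall = (siren + category + speed) / 3
assert abs(overall - (0.5 + 2/3 + 0.75) / 3) < 1e-9
```

Real coverage engines add weighting, ignore-lists, and per-value targets on top of this basic averaging, as described next.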

OSC2 lets you further shape the grades by ignoring values that are not interesting (for example, I do not care about trucks in my project), setting a different target for some values (for example, I want to try at least 10 cut-ins from the right and 5 are enough from the left), and more. Users may build different dashboards and analytics tools around these language concepts.

How do you handle the “curse of dimensionality” issue? If I have 100 parameters, will the platform test all combinations?

This is the core issue of taming infinity. Scenarios have parameters, each with a certain range of values. And because you can always mix scenarios, each with multiple parameters, the possibilities never end, right? Even thinking about it makes your head spin. One of the things we try to show — and I think you got at least a first glimpse of it — is that there are ways to use coverage definitions and related technology to turn this into a kind of optimization problem: if you have just one night and, say, a hundred machines, what is the best way to use those resources to collect a first pass of coverage?

If you have a weekend and a thousand machines, then here is your next step: you are going to cover all your scenarios breadth-first, at least to some degree. Then, if you have more time, you can collect more and more coverage. Obviously, you will never do all the combinations; in fact, even with just a single speed parameter, you will never get to all of them. So technology and methodology are required to make a practical solution.

So, as I think the writer of this question already implied, when you define your coverage, you should think through what the dangerous things are and what the various interactions are. In a sense, a coverage model should be a map of your fears. If you assume that in this ODD there will be multiple vehicle types (trucks, buses, and so on), then you had better define a coverage model that goes through them all. If you think there will be snow and rain as well as nice days and fog, you had better go through all of these, and then make sure that you cross and combine them all. And if you don't have the assets, or if the specific simulator you're connected to does not support snow, for instance, then you should do something about it, such as using a better simulator.

In your demo showing the test scores, ranks, etc., could you explain again what you are optimizing in terms of “ranks?”

Ranking is one of the optimization capabilities that Foretellix built on top of the OSC2 standard input format. Specifically, ranking optimization helps you create an optimized test suite for the entire V&V effort or for selected V&V needs.

The fact is that executed scenarios cover their own specified intent but may also cover much more in a by-the-way manner. For example, while doing a cut-in at a random location, I may also drive under a bridge. Ranking can take a test suite with 500 tests and identify, say, 50 tests that produce the same coverage result. In subsequent test regressions, I can run those 50 scenarios with minimal redundancy, shorten the test suite execution time, and save the expensive simulation cycles for other goals.
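The redundancy-removal idea behind ranking resembles a greedy set cover. The sketch below is illustrative only; Foretellix's actual ranking optimization is proprietary and more sophisticated:

```python
def rank_tests(test_coverage):
    """Greedily pick tests until no remaining test adds new coverage.

    test_coverage: {test_name: set of covered coverage items}.
    Returns an ordered list of tests with minimal redundancy.
    """
    remaining = dict(test_coverage)
    covered, ranked = set(), []
    while remaining:
        # Pick the test contributing the most not-yet-covered items.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = remaining[best] - covered
        if not gain:          # everything left is fully redundant
            break
        ranked.append(best)
        covered |= gain
        del remaining[best]
    return ranked

suite = {
    "t1": {"cut_in", "bridge"},
    "t2": {"cut_in"},          # fully redundant with t1
    "t3": {"overtake"},
}
ranked = rank_tests(suite)     # -> ['t1', 't3']; t2 adds nothing
```

The pruned list preserves the suite's coverage while dropping tests whose contribution was already achieved by-the-way.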

Can your tool be also used for parameter identification through its optimization capabilities? Is there a predefined workflow for that?

The answer is yes. Note that these webinars are really about the approach: CDV efficiently spans the desired space and identifies unknown scenarios that can later be sent to a KPI optimizer. Specifically, Foretellix technology uses machine learning algorithms to identify feature importance, which helps with both debugging and efficient scenario exploration.

Do you consider the probability of scenarios when selecting them or when evaluating the performance of the AV in the report?

Yes. By default, a uniform distribution is applied for value selections – all legal values have the same unbiased chance of being selected. You can use OSC2 default constraints to capture the desired probabilities for value selection. As the name suggests, default constraints apply only in the absence of stronger direction: if a test constraint further steers the distribution (for example, for an edge-case scenario), the default constraints are overridden. Note that in edge-case scenarios, even though one behavior is pushed to the extreme, it is important that other behaviors remain normal to avoid a scenario that is simply too crazy. Also note that in some cases where statistical reports are needed, a joint distribution needs to be applied.
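In pseudo-Python terms (an analogy, not OSC2 semantics), the default is a uniform draw, a default constraint acts like a set of weights, and a test constraint overrides both:

```python
import random

def pick(values, weights=None, override=None):
    """Uniform by default; `weights` play the role of a default
    constraint, and `override` (a test constraint) wins over both."""
    if override is not None:
        return override
    if weights:
        return random.choices(values, weights=weights, k=1)[0]
    return random.choice(values)   # uniform, unbiased selection

random.seed(0)
speeds = [30, 50, 70, 90]
normal = pick(speeds, weights=[1, 4, 4, 1])  # real-life-like distribution
edge = pick(speeds, override=90)             # edge-case constraint wins
```

The point of the analogy: the steering lives outside the scenario definition, so the same scenario serves both statistical and edge-case runs.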

How do you define “undesirable” results such as a collision?

OSC2 provides success criteria mechanisms to enable self-checking tests. Such mechanisms are critical to enabling scale. Two main checker categories are:

  • Generic checkers – Applied for every execution regardless of the executed scenarios (for example, don’t collide with other actors)
  • Scenario-specific checkers – Applied in a specific, narrow scenario context (for example, in the cut-in scenario's end-of-change-lane phase, require a minimum TTC of 1s).
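As an illustration of the two categories (hypothetical signal and phase names, not an OSC2 API), a generic check runs on every frame of every execution, while a scenario-specific check guards only its own phase:

```python
def check_no_collision(frame):
    # Generic checker: applies to every execution, regardless of scenario.
    assert not frame["collision"], "collision with another actor"

def check_cut_in_ttc(frame, min_ttc=1.0):
    # Scenario-specific checker: fails if TTC drops below the threshold,
    # but only during the end-of-change-lane phase of the cut-in.
    if frame["phase"] == "change_lane_end":
        assert frame["ttc"] >= min_ttc, (
            f"TTC {frame['ttc']}s below the {min_ttc}s threshold")

frame = {"collision": False, "phase": "change_lane_end", "ttc": 1.4}
check_no_collision(frame)
check_cut_in_ttc(frame)   # passes: 1.4s >= 1.0s
```

Because both checks are self-contained, they can run unattended over very large regressions, which is what makes them critical for scale.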

Are these sequences of behaviors open loop? e.g. in the lane overtaking sequence, could you randomly generate a sequence where the actor sideswipes the AV? How would you isolate those scenarios where a collision is due to an overly aggressive actor rather than the AV’s failure? Since you’re generating a huge volume of scenarios, manual triage isn’t an option.

This is exactly true. When you go from thousands of runs per night to hundreds of thousands of runs per night, you can no longer think about checking things manually. Not every accident or collision is necessarily a SUT error. If you create a challenging scenario where, for example, 50% of the instances are supposed to end in a collision, then such collisions are obviously not all SUT errors. You need a fairly sophisticated way to define what is clearly an error, what is clearly okay, and what are the gray areas that you may want to check later. It is a combination of technology, methodology, and even art.

You also need a standard format for errors, and you need tools that help you cluster and analyze them. We didn't get to show that, but I think it's an interesting thing to see.

If the randomly picked parameters generate a combination that is not physically possible for a vehicle at runtime, how is it handled?

The constraints include both scenario constraints, such as vehicle category, and implied physical constraints, such as those that connect distance to speed, to acceleration, and so on. An OSC2 tool should adjust the scenario to meet both your scenario and implied physical constraints, or it should report an error if no such scenario can be achieved. Many times, the constraint solver may report constraint contradictions even before simulation starts. In the case of partially controlled actors (for example, smart actors driven by behavioral models) a runtime contradiction may be reported. As was said previously, Foretellix tools load the physical constraints by default, so users can focus on the abstract scenario constraints.

Is there an open parser/interpreter to validate a scenario?

One of the advantages of a standard is that the multiple vendors involved push the industry forward. PMSF offers a parser and CLI for OSC2.0 syntax checking. For more information, please click here.

How should we set the scenario parameter range?

Scenario parameter ranges are just another type of constraint. As was discussed, abstract scenarios can include any kind of constraint to capture dependencies. A parameter range can be between any two expressions: not just a literally specified range such as keep(speed in [10..100]kph) but also keep(speed in [attr_a..attr_b]). Note also that the final calculated values will be a resolution of the range and the scenario context. For example, if a truck is used as part of the cut-in scenario, the eventually randomized vehicle speeds will conform to both the range requirement and the truck's speed capabilities.
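Conceptually, the tool intersects the user range with the contextual capability range before randomizing. A toy sketch, assuming a simple interval model rather than the general constraint solving an OSC2 tool performs:

```python
import random

def resolve_range(user_range, capability_range):
    """Intersect a user-specified range with a capability range;
    an empty intersection is a constraint contradiction."""
    lo = max(user_range[0], capability_range[0])
    hi = min(user_range[1], capability_range[1])
    if lo > hi:
        raise ValueError("constraint contradiction: empty range")
    return lo, hi

# keep(speed in [10..100]kph), but suppose the truck tops out at 90 kph:
lo, hi = resolve_range((10, 100), (0, 90))   # resolves to (10, 90)
random.seed(1)
speed = random.uniform(lo, hi)               # final randomized value
```

The contradiction branch corresponds to the solver reporting an error before simulation starts, as described in the earlier answer.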

Are the trajectories generated kinematically feasible by default? If not, is there a way to encode such constraints?

By default, generated scenarios and dynamics are both feasible and physically possible. Note that virtual platforms may benefit from running scenarios that are not physically possible. (For example, you can save simulation time by allowing infinite acceleration.) The OSC2 LRM and domain model allow executing actions that are not physically possible, as well as those that are. There are methodology guidelines on how to write reusable scenarios that are portable between virtual and physical platforms.

How do you create edge-case scenarios?

The definition of edge-case scenarios can be broad: non-typical parameter values, the desired traffic density, error injections, road conditions, a specific mix of scenarios, and more. There are efficient ways to come up with the right set of edge cases in the planning process. Once the plan is set, OSC2 facilitates translating the functional goals into abstract scenarios and a coverage model to efficiently meet those goals.

Is there a reference implementation for tools implementing the format?

Not at this point. In terms of effort, it would be extremely challenging to create a reference implementation. Also, past experience has shown that a reference implementation can hold back industry progress by forcing a specific implementation scheme on vendors, with the result that, for example, constraint solvers did not progress.

As usual, drive safe,
Sharon

Autonomous driving has the potential to revolutionize our daily routines, our social life, and even the demographic structure of our cities. Between current reality and this utopian future stands the barrier of an infinite scenario space versus a finite set of resources. Unfortunately, with mostly manual and siloed solutions we can't cross this barrier… but now (soon?) comes OpenSCENARIO 2.0.

What is OpenSCENARIO 2.0?

ASAM (Association for Standardization of Automation and Measuring Systems) is a non-profit organization that promotes standardization in automotive development and testing.

For example, ASAM OpenDRIVE is a standard format to provide a common base for describing road networks with extensible markup language (XML) syntax. OpenSCENARIO 2.0 (OSC2.0) is the upcoming ASAM standard language to develop, verify, and validate the safety and efficiency of automated driving systems. It offers a modern syntax resulting in a concise and readable format.

What makes OpenSCENARIO 2.0 so powerful?

Since OSC2.0 is not yet released, we can't show code snippets. While some great decisions were made around the language syntax, the key OSC2.0 revolution is the ability to capture scenario intent in a machine-readable, formal way. The language allows capturing dependencies over time between actors (such as vehicles and persons) and their behaviors, using mathematical constraints. This means that OSC2.0 scenarios can be processed by an intelligent machine to create tests that would otherwise require an army of human test writers to create manually. I've collected a few examples of this efficiency below.

Automation – OSC2.0 makes it possible to teach an intelligent machine abstract maneuvers and locations and then ask it to produce and execute test scenarios intelligently and repeatedly according to this acquired knowledge. For example, using constraints, I can teach a machine what a cut-in scenario is (start behind and to the side of the Ego, finish ahead in the same lane) and request the machine to perform it in various locations, with various speeds and distances, etc. The machine can automate scenario creation, interacting with both physical (e.g. HIL) and virtual (e.g. SIL) testing platforms. It can auto-adjust the scenario to any desired location and speed or intelligently mix it with other scenarios. Inferences and calculations between scenario phases and parameters that in the past required human intervention and a trial-and-error process can now be performed by a machine. This allows the generation of millions of meaningful and valid scenarios from one abstract scenario.

Figure 1: Leveraging an intelligent machine for automation

With proficiency and speed, intelligent machines can reach a high scale of meaningful and fine-controlled scenarios more quickly than by any other means. The machine can even adjust the progression of the scenario to the unpredictable behavior of the autonomous vehicle.

Measurability – OSC2.0 allows users to accurately describe the testing goals for the intelligent machine to work towards. The Verification and Validation (V&V) coverage goals can come from real-life statistics, ODD requirements, risk calculations, or engineering judgment. The machine digests the user-specified goals, tries to meet all requests, and produces an accurate report of achieved and non-achieved coverage goals. Setting goals and being able to measure them at scale is key in order to deal with the infinite test space.

Optimization – Note that since the intelligent machine understands the scenario intent as well as the ODD or project goals, it can create a test regression optimized for specific needs and avoid executing already explored areas. Trying to manually analyze hundreds of thousands of scenarios to understand which ones are relevant or redundant for a specific requirement is extremely time-consuming for a human being, but can be easily automated by a machine. For example, I can ask the machine to create a regression that focuses on ACC automation functions and avoids running redundant, irrelevant scenarios.

Figure 2: Measurability and optimization

Finding unknown scenarios – The challenge of ensuring that your autonomous driving system has been adequately tested in the unknown scenario space of the ODD (e.g. as required by SOTIF) is yet another place where OSC2.0 can help. If the machine understands and randomizes a cut-in independently, it may explore multiple conditions that the user may have overlooked. Even if I specify a few scenario attributes or mix two known scenarios, the rest of the scenario is randomly generated, so there is a surprise element in every test. Note that OSC2.0 semantics enable true randomization of the scenario, not shallow fuzzing of parameters that are easy to vary: there are dependencies between speed and distance, acceleration and speed, and a given combination may require a totally different road segment. The real value is resolving the interdependencies to randomize distances, latencies, and locations, and having the machine plan a scenario around the user goals. The user can always select between real-life distributions and edge-case scenarios to explore risk dimensions.
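To see the difference from shallow fuzzing, consider a toy cut-in sampler in which the gap distance is derived from speed and headway time rather than fuzzed independently (the formula and ranges here are illustrative, not from the standard):

```python
import random

def random_cut_in(seed=None):
    """Sample a cut-in whose gap distance is consistent by construction."""
    rng = random.Random(seed)
    ego_speed = rng.uniform(60, 120)   # kph
    gap_time = rng.uniform(1.0, 3.0)   # seconds of headway at the cut-in
    # The gap is derived, not independently fuzzed: it must match
    # the sampled speed and headway (kph -> m/s via /3.6).
    gap_m = ego_speed / 3.6 * gap_time
    return {"ego_speed": ego_speed, "gap_time": gap_time, "gap_m": gap_m}

s = random_cut_in(seed=42)
```

Fuzzing speed and gap independently would produce many draws that are physically meaningless; deriving the dependent value keeps every draw a valid, meaningful cut-in.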

Figure 3: Finding the unknowns

The video below shows an execution that was randomized from a merge into a highway abstract scenario. In many previous randomized scenarios, the Ego was able to successfully merge onto the highway. However, this video demonstrates a failure to merge. We need to analyze the reasons for the Ego’s poor decision to drive off the road, but by generating the scenario with randomized location, distances, speed, and other parameters, a potential bug has been exposed.

A standard language across test execution platforms – There is a big overlap between the scenarios that we want to execute on SIL, HIL, test tracks, street driving, and more. The various simulators available today offer different strengths, such as better perception versus better vehicle dynamics capabilities; therefore, the use of multiple simulators is required. OSC2.0 allows porting the same scenarios and correlating the results across all platforms, bridging between the different teams while leveraging the strength of each testing platform. Using a simulator's proprietary language locks the user into that simulator, which may result in compromising on quality or a painful duplicated effort. Having the entire industry support and develop technologies around OSC2.0 is the right formula to move the industry forward globally.

Reuse and ODD-agnostic ready-made content – The digital transformation caught the automotive industry unprepared: basic software capabilities such as type inheritance, extensibility, and modularity were not part of the scenario creation jargon. For example, I can define a vehicle's category to be a truck or a sedan. If a truck is randomized or requested in a scenario, the vehicle will automatically have a truck's dimensions and dynamics. Such fundamental software capabilities further minimize the amount of OSC2.0 code that you need to write and maintain. Add these software practices to the location- and ODD-agnostic capabilities of OSC2.0, and the industry finally has the necessary capabilities to create truly reusable, OSC2.0-compliant test suites.
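The inheritance behavior described here maps onto familiar class mechanics. In Python terms (an analogy only; OSC2 has its own syntax, and the attribute values below are made up):

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    length_m: float = 4.5
    max_speed_kph: float = 180.0

@dataclass
class Truck(Vehicle):
    # A randomized or requested truck automatically carries truck
    # dimensions and dynamics; the scenario writer adds nothing.
    length_m: float = 12.0
    max_speed_kph: float = 90.0

t = Truck()
# t.length_m and t.max_speed_kph reflect the truck's overrides, while
# anything not overridden is inherited from Vehicle.
```

Because a `Truck` is still a `Vehicle`, every scenario written against the base type works unchanged when a truck is selected, which is exactly the reuse argument made above.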

To summarize, the OSC2.0 revolution—the automation, measurability, optimization, standard language across test execution platforms, reuse, and ready-made content—will revolutionize industry efficiency and thoroughness. Solutions that provide the value described above already exist today, and we will experience more solutions that can leverage the unique capability of OSC2.0 to capture scenario intent.

Now that we have reviewed the value of OSC2.0, let's discuss a bit of the past, present, and future of the OSC2.0 standard.

Where did OpenSCENARIO 2.0 come from?

The figure below illustrates the M-SDL to OpenSCENARIO 2.0 journey.

Figure 4: Open M-SDL to OpenSCENARIO 2.0

The abilities to capture scenario intent via constraints, set measurable coverage goals, perform optimization, and enable SW capabilities were introduced by Foretellix in January 2017 as part of the open M-SDL language. As strong believers in standards, we opened the language to the public domain, allowing hundreds of companies to download it. Eventually, we contributed our open M-SDL syntax and concepts to the ASAM organization and invested a lot of effort, jointly with multiple dedicated partners, to develop OpenSCENARIO 2.0.

Will Foretellix continue its contribution to OSC2?

Foretellix is committed to continuing to contribute to the evolution of OpenSCENARIO 2 and its future versions for the benefit of the industry. In parallel, we will continue to serve the needs of our user base, pushing both the technology and the language forward.

When will OSC2.0 intelligent machines be available?

At least nine vendors have already announced their plans to support the upcoming OSC2.0 language syntax. Over the past several years, Foretellix has developed native engines within the Foretify intelligent machine that are necessary to fully realize OSC2.0’s potential. Our technology has been used at leading OEM and Tier1 companies for years, to develop and test AD and ADAS functions. The technology is always accompanied by expert V&V consulting on automation, methodology, and the overall virtualization process.


Register for the OSC2.0 Webinar

If you wish to know more, register for our ‘Taming Infinity with OpenSCENARIO 2.0’ webinar. This is the first in a series of webinars led by our experts, touching on various OpenSCENARIO 2.0 language features. The series will build your knowledge, starting from scenario coding guidelines all the way to methodology and flows. We will also review the intelligent machine and its various engines, such as the constraint solver and optimizer.

If you wish to check out Foretellix’s OSC2.0 automated flows and solutions, drop us an email at info@foretellix.com.

BTW, we are hiring. If you are interested in an exciting role on a winning team that is building the next generation of AD and ADAS test automation solutions, click here.

As usual, drive safe,
Sharon

At Foretellix, we continue evolving M-SDL to meet industry needs, with the intention of converging Open M-SDL with OpenSCENARIO 2.0. As in previous versions, updates were made in response to customer needs, partner feedback, and inputs we get through our participation in ASAM’s OpenSCENARIO 2.0 development project. We want to thank them all.

This feedback allows us to enhance the language to address critical user needs in a timely manner. M-SDL version 20.10 includes several important features introduced over the past few months:

  • Refined control over the randomly selected scenario locations with new road elements.
  • Directed and explicit control of movements with arbitrarily shaped movements.
  • Ability to use the same scenario for both left and right-hand side traffic flow to avoid duplicating scenario creation and maintenance efforts.

If you wish to be updated or to contribute to the development of M-SDL, please join ASAM and the M-SDL community.

Road elements

AV and ADAS verification may require driving on a specific road type or road segment with specific properties. These may be available on either basic OpenDrive maps or only on complex proprietary maps.
On the other hand, unlike very directed tests, abstract scenario creation requires you to be able to describe these properties in an abstract manner so that all relevant places on the map can be used for that scenario. To meet these requirements, M-SDL 20.10 includes a new struct type: road_element.

A road element represents a road segment that has specific parameters. These parameters could be physical properties of the road itself (such as the number of lanes or the lanes’ width) or be related to the surrounding environment (e.g. a road on a cliff or a tree-lined road). The movement of an actor takes place on a sequence of road elements.

An M-SDL library of road elements may include several basic elements such as a generic road, a town junction, a highway, and so on. Users can extend the library’s set of road elements to include elements with features specific to their verification needs.

For example, assuming that the library has elements that describe a junction, a vehicle can follow a specific path through the junction.

scenario car.pass_thru:
    car1: car
    j: town_junction
    in_road: junction_entry with(junction: j)
    out_road: junction_exit with(junction: j)
    do serial(duration: [1..10]second):
        enter: car1.drive() with:
            path(in_road) # approach the junction
        inside: car1.drive() with:
            path(j) # traverse the junction
        exit: car1.drive() with:
            path(out_road) # leave the junction

Road elements enable the description of scenarios that can be re-targeted on multiple maps or different locations within the same map.

Arbitrarily shaped movements

As part of the development process and throughout the verification flow, users want actors to perform explicit and fully specified maneuvers.

This need is exemplified by test protocols published by safety assessment agencies such as NHTSA and NCAP. Such test protocols outline in great detail the tested vehicle trajectory with exact distances and angles. The new shape modifier meets this requirement by fully controlling desired aspects of a maneuver.

Figure 1: An Automatic Emergency Braking test maneuver, according to the NCAP test protocol

 

Figure 2: A Traffic Jam Assist test maneuver, based on NHTSA working document

Although these examples demonstrate the immediate need to constrain the position of an actor, users may want to constrain other attributes (e.g. speed or acceleration). Users can create their desired variations based on the basic shapes added to the M-SDL library:

  • any_position_shape constrains the actor’s position to a specific trajectory.
  • any_speed_shape constrains the actor’s speed to a specific trajectory.
  • any_acceleration_shape constrains the actor’s acceleration to a specific trajectory.
  • any_direct_control_shape constrains the actor’s throttle and brake pedal inputs to a specific trajectory.

For example, assume that a user needs to create an arc shape. The arc may inherit from any_position_shape. To use this arc shape, the user first creates an instance of the shape in a scenario and then constrains the movement scenario of an actor to the shape using the shape() modifier. In phase 1 of the scenario shown below, car1 drives for 10 seconds on a highway. In phase 2, car1 drives along the arc shape until the maximum duration is reached.

scenario car.drive_with_shape:
    arc1: arc_shape with (radius: 15m, angle: 10deg)
    hw: highway
    car1: car
    do serial:
        p1: car1.drive(duration: 10s) with: path(hw)
        p2: car1.drive() with: shape(arc1)

Traffic flow

Most OEMs sell vehicles that must perform safely regardless of whether traffic flows on the left-hand or right-hand side of the road. Having to rewrite every scenario so that it runs on the other side of the road is a tedious, error-prone, and resource-intensive task that also doubles the scenario maintenance. M-SDL enables you to write scenarios that can run on either side of the road, avoiding the need to duplicate scenarios.

For example, you can use a test parameter to run the cut_in_and_slow() scenario either on the right-hand side of the road (Figure 5) or on the left-hand side (Figure 6).

Figure 5: Right-hand side traffic flow
Figure 6: Left-hand side traffic flow

Several other modifications were made in the recent M-SDL 20.10 release. We have added clarifications and improved descriptions throughout the LRM. To get the complete list of additions and clarifications, please refer to the log in section 18.1 of the LRM.

Drive safe!

— Sharon

OpenSCENARIO 2.0 is about to be released with innovative expressiveness and abstraction to fuel revolutionary automation in AV development and V&V processes. We wanted to see if we could use this automation today, to efficiently validate products’ adherence to the recent UNECE level-3 ALKS regulation.

Foretellix announced today its ALKS regulatory solution and, jointly with Mobileye, demonstrated the use of this solution, combined with an RSS controller, to verify full-system regulation compliance.

Figure 1: A description of the flow

What should you expect from the Foretellix ALKS flow?

  • Efficient scenario creation producing a massive number of tests to accurately capture the ODD’s regulation needs. The scenarios auto-adjust to any desired location or maps, explore the unexpected and unknowns with random intelligent value selections, and are easy to steer to meet user needs using simple spreadsheets.
  • Foretify scenario runtime adjustments monitor the unpredictable autonomous vehicle responses and adjust the other actors’ behavior to meet the user intent. This maximizes the value of each simulation, saves compute and human resources, and shortens the overall verification cycle.
  • Coverage is part of the reporting, and a dashboard presents which regulation scenarios were actually executed and observed, as opposed to what the test requested. This is especially needed for automation functions like level-3 ALKS, where unpredicted Ego responses may invalidate the test goals.
  • Self-checking scenarios for both the regulatory and RSS rules. The pre-provided checks can be easily refined to meet user needs.

It is important to note that in addition to the ALKS regulation package, Foretellix also provides a more extensive ALKS Verification & Validation package that goes beyond the regulatory needs and validates ALKS devices in multiple error-prone edge cases and unknown conditions, such as other maneuvers, various road circumstances, stationary objects (like parked vehicles), and more. A comparison between the two packages is provided below.

Figure 2: The ALKS regulatory vs. the ALKS V&V packages

To learn more about the ALKS regulatory or verification & validation packages, please contact us.

Register to receive ALKS scenarios verification code examples.

Given the challenges of verifying and validating (V&V) ADAS and automated highway functions, how would you describe the ideal toolset and methodology?

Over the years, Foretellix has collaborated with many ADAS and AV developers to explore their requirements for a ‘dream solution’. In this post, I’ll summarize their answers and offer a solution to the V&V challenges.

As I’m writing this blog, AAA recommends limiting the use of partially automated driving systems (you can read more here, here and also here). Such safety concerns damage public trust and reduce the perceived value of automation functions. And OEMs are, indeed, devoting large (and growing) efforts to the V&V of these functions.

‘New cars can stay in their lane—but might not stop for parked cars’ (Credit: AAA)

We at Foretellix always try to improve the robustness of active safety and automated driving systems by evaluating, challenging, and improving existing V&V tools and workflows.

Users describe their ‘dream solution’

Over the past couple of years, we collaborated with multiple customers and partners, asking them to describe their ‘ideal solution’. Below are the most commonly received answers about this tool – let’s call it ‘Solution-x’. (BTW, this is quite a list. If you think that you know the requirements, feel free to jump to the Foretellix offered solution.)

Produce a massive number of tests effortlessly and out-of-the-box 

  • While ADAS and highway functions are simpler than, say, a level 4 AV, an infinite number of things can go wrong. Solution-x should generate an unlimited number (as many as I want) of variations of each scenario-based test, uncover edge cases and unknowns (as per ISO 21448 SOTIF), and efficiently find many bugs.
  • The resulting tests should be high quality and diversified: unique in locations, trajectories, circumstances, and participants, beyond attribute variations on a hardcoded scenario.
  • My compute is a valuable resource, and my engineering resources even more so. I want to make sure that the generated scenarios and tests are both feasible and meaningful, with minimal redundancy, to ensure efficient execution and to ensure that my team does not spend time debugging invalid scenarios.

Efficiently meet test intent

  • Solution-x should plan the required scenarios upfront, but also adjust the plan in real-time, according to unpredictable decisions of my SW stack.

Provide goals and report completeness

  • With an infinite scenario space, I need a measurable process to know when I have fulfilled my engineering task
  • Solution-x should provide an extensive verification plan combining industry knowledge of ADAS and Highway, past collisions, and standardization requirements
  • The provided plan should be easily extendible to include project-specific goals
  • If an execution fails to meet the test intent, solution-x should report the failure and suggest a different way to achieve the intent
  • Throughout the V&V process, solution-x should clearly indicate the current status, allowing users to focus on unverified areas and eliminate redundant tests

Combine out-of-the-box and user-defined tests

  • Beyond delivering the known, error-prone generic ADAS edge cases, solution-x should make it easy for domain experts to leverage their knowledge and create multiple tests with low effort and a minimal learning curve.

Be simulator and testing platform agnostic

  • Our company uses multiple simulators for various needs and may change or adopt another simulator in the future. Solution-x should be portable across all simulators and testing platforms we use (e.g. HIL and test-track)

Use a standard language with an industry commitment and roadmap

  • Standards foster a choice of tools and attract talented users searching for career growth.

Support regulation needs and future automation levels

  • For example, the new UNECE SAE Level 3 ALKS regulation lists specific needs
  • Current investment and methodology should be useful moving forward to higher SAE automation levels

The Foretellix Solution

While the Foretellix technology already solves many of these challenges, the message from users is to also provide an out-of-the-box solution on top of our powerful “do-it-yourself” tool. Users working on a specific ODD or automation function want assistance with the major questions: What to test? How to test? And when am I done? Considering all the above, now is the perfect time to lift the curtain and introduce our Solution-x: the Foretellix ADAS and Highway Solution.

Figure 1: Foretellix out-of-the-box ADAS and Highway solution

The ADAS and Highway solution is the first of a family of verification packages that utilize the power of the Foretify platform and extend our multipurpose platform to address the unique challenges of different functions and ODDs. The ADAS and Highway content is based on research into customer care-abouts, past collisions, technology limitations, upcoming regulations, safety standards, and disengagement reports, to deliver an effective solution for exposing bugs and promoting safety.

Our ADAS and Highway solution was designed for ADAS and Highway V&V tasks, including:

  • A verification plan representing many scenario categories, variants, and overall coverage space that must be completed as part of the V&V process.
  • An extensive ready-made regression that activates and mixes 36 abstract M-SDL maneuver scenarios to generate hundreds of thousands of meaningful concrete tests, stationary objects in random locations and angles, and other risk dimensions
  • Self-checking mechanism with pass-fail criteria
  • An intuitive dashboard displaying the executed and, as important, the non-executed scenarios
  • Simplified spreadsheet-based tabular interface to enhance and refine the tests to hit non-verified concerns and make sure we generate meaningful and feasible scenarios

The solution requires a simple initial integration step, after which both expert and novice users can exercise the solution to achieve a thorough exploration of their devices.

Figure 2: The Shortest Path to Value

The ADAS and Highway Solution’s pre-provided test regression is captured in a readable table format that can be easily extended to meet any user needs. Users fill out a spreadsheet indicating the tests to run, either specifying concrete values for test attributes or asking the solver to complete the missing attributes with random values that adhere to the abstract constraint requirements. Each table row is equivalent to coding an M-SDL test.

Figure 3: Simplification and Productivity with Spreadsheets and a sample of the equivalent generated code

To help users analyze regression results, Foretellix offers Foretify Manager, a highly configurable tool that quickly analyzes DUT errors as well as behaviors of the DUT that have not been well tested.

Figure 4: Foretify Manager

In addition to the pre-provided scenarios, the solution enables a powerful approach to explore unknowns and edge-cases using sub-scenario mixing. The ADAS and Highway solution leverages the Foretify solver to allow users to mix the pre-provided content to achieve new unexpected variations. To better understand the implication of this, please review one of the pre-provided mixes, the double-cut-in mix.

Figure 5: Mixes require solving power to automate (a double cut-in example)

The purple area above is the first cut-in and the orange one constitutes the second. The Foretify solver can select a location that matches the needs of both cut-ins and insert implicit actions to ensure the vehicle coordination needed for the larger double-cut-in scenario.
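To make the idea concrete, a double-cut-in mix might be sketched in M-SDL roughly as follows. This is an illustrative sketch only: the scenario and parameter names are assumptions, and the exact syntax is defined in the M-SDL LRM.

```
# Illustrative sketch -- mix two cut-in sub-scenarios; the solver
# finds a location and timing where both can legally co-exist
scenario traffic.double_cut_in:
    do mix:
        cut_in(side: left)    # first cut-in (purple)
        cut_in(side: right)   # second cut-in (orange)
```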

Another key benefit of the ADAS and Highway solution is its use of the open language M-SDL (the basis for the OpenSCENARIO 2.0 concept project, now in the process of becoming a standard). The language enables flexible test customization, fine-tuning of the success and failure criteria, and powerful refinements of the built-in coverage model. Adopting and learning M-SDL has triggered interest among individuals and teams searching for the next level of automation.

Note that I did not discuss user regulation needs. This is an exciting part of the ADAS and Highway solution which deserves its own dedicated blog.

Conclusion

As reported above, the industry still struggles with existing ADAS automation functions, which continue to grow in complexity. The ADAS and Highway Solution offers an extensive out-of-the-box solution tuned for finding bugs and promoting safety.

To learn more about the solution please follow this link, and, as usual, comments are welcome either below or via email.

Travel safely,
Sharon

Language is one of the most important tools that we have in our toolbox – we read, write, debug, think, and communicate, in the provided language terms. The Measurable Scenario Description Language (M-SDL) is a highly developed modeling language addressing the challenges of Autonomous Vehicle (AV) and Advanced Driver Assistance Systems (ADAS) verification.

This blog introduces the new features in the recent release of M-SDL (M-SDL 20.07).

At Foretellix, we will continue evolving M-SDL to meet industry needs, with the intention of converging Open M-SDL with OpenSCENARIO 2.0. I want to take this opportunity to thank the multiple customers, M-SDL partners, and OpenSCENARIO 2.0 committee members for their priceless feedback. If you wish to receive updates or contribute, I recommend joining ASAM by following this link, and the M-SDL community by using this link.

M-SDL version 20.07 introduces several important features, including:

  • Conditional inheritance (an innovative way to model orthogonal categories)
  • Recording data for various purposes
  • Coverage model extensions and their important role for reuse
  • Tracing changes in the value of an expression during scenario execution

If you are not already familiar with the basics of M-SDL, please refer to this link.

Conditional Inheritance

M-SDL allows users to create a partial, abstract description of a scenario together with a set of legality rules. This enables technology to create multiple concrete scenarios from a single abstract description.

In order to thoroughly verify an AV or ADAS, users often need to take into account multiple ways of categorizing actors or scenarios. For example, a vehicle can be a truck, a car or a motorcycle, and at the same time it might be an emergency vehicle or not. This can be easily modeled in M-SDL as follows:
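The original figure is not reproduced here; a minimal, M-SDL-style sketch of the two orthogonal categories might look like the following. The attribute and type names are assumptions based on the surrounding text, and the exact syntax is given in the M-SDL LRM.

```
# Illustrative sketch only -- names and syntax are approximate
type vehicle_kind: [truck, car, motorcycle]

actor vehicle:
    kind: vehicle_kind          # first categorization
    emergency_vehicle: bool     # orthogonal categorization
```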

Each one of these categories can have additional attributes. For example, a truck might have a num_of_trailers attribute, and an emergency vehicle might have a siren attribute. Once users randomize a vehicle, they expect the following:

  • If its kind is randomized to truck, it should have all truck attributes and constraints
  • If its emergency_vehicle attribute is randomized to true, it should have all emergency-vehicle attributes and constraints
  • If the kind is truck and emergency_vehicle is true, it should get all attributes and constraints of both categories.

Traditional object-oriented inheritance does not support this auto-adapting modeling requirement, but M-SDL’s new conditional inheritance allows users to write the following:
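Since the figure itself is not included here, the following is a hedged, M-SDL-style sketch of conditional inheritance, with names taken from the text; the exact syntax is given in section 5.4 of the LRM.

```
# Illustrative sketch -- exact syntax per the M-SDL LRM
actor truck inherits vehicle(kind == truck):
    num_of_trailers: uint       # truck-only attribute

actor emergency inherits vehicle(emergency_vehicle == true):
    siren: bool                 # emergency-vehicle-only attribute
```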

Declaring a vehicle as follows:
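(The original figure is not reproduced; a minimal sketch, assuming a field named v1 as in the text below:)

```
# Illustrative sketch -- a plain vehicle field, all attributes left random
scenario top.my_test:
    v1: vehicle
```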

This allows the technology to randomize v1 to be one of many types, including a truck (with a num_of_trailers attribute), an emergency vehicle (with a siren attribute), or an emergency truck (with both num_of_trailers and siren attributes). The resulting type is conditioned on the values of randomized fields, which is why the feature is called conditional inheritance.

Conditional inheritance allows test-writers to create the required instances simply by constraining the attributes of the parent class. For example:
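A hedged sketch of such a constraint, using M-SDL’s keep() construct; the instance and attribute names are assumptions:

```
# Illustrative sketch -- request an emergency truck by constraining
# only the parent-class attributes
scenario top.my_test:
    v1: vehicle
    keep(v1.kind == truck)
    keep(v1.emergency_vehicle == true)
```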

For more information on inheritance, please refer to section 5.4 of the M-SDL LRM.

The Record Feature

M-SDL’s coverage features precisely and objectively capture verification goals. Throughout the verification process, coverage data is used to indicate the completion status of verification goals. However, at times, users might want to record data for other purposes. For example, they might want to accumulate information to:

  • Check that the Ego behaved properly (by storing KPI values, for example)
  • Help debug scenario execution
  • Review the achieved value distribution.

M-SDL 20.07 provides a new record() capability for these purposes. As with coverage, users can flexibly define the required values, sampling time, ranges, and filtering:
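The original example appears in a figure; as an illustration only, a record() call might look roughly like the sketch below. The parameter names are assumptions, not the LRM signature.

```
# Illustrative sketch -- record KPI and debug values during execution
extend cut_in:
    record(ego_car.speed, unit: kph)       # KPI check / distribution review
    record(distance_to_npc, unit: meter)   # debug aid
```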

For more information on the record() feature, refer to section 7.1.2 of the M-SDL LRM.

Coverage Extensions

In order to facilitate reuse, M-SDL allows users to extend types and classes to match different tests, projects, or ODD needs. A library of reusable scenarios should include base coverage definitions, while allowing users to refine them to meet their needs. For example, if users are working on a specific ODD speed limit, they might want to remove faster speeds from their coverage goals. Going faster than the ODD speed limit could be either impossible or not interesting for the project requirements.

M-SDL 20.07 allows users to easily refine coverage goals by adding or overriding the predefined coverage definitions:
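For illustration, such a refinement might look roughly like this sketch; the coverage item names and parameters are assumptions, and section 7.1.4 of the LRM gives the exact syntax.

```
# Illustrative sketch -- restrict a predefined speed coverage item
# to an ODD speed limit of 80 kph
extend cut_in:
    cover(ego_speed, unit: kph, range: [0..80])
```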

For more information on coverage extensions, refer to section 7.1.4 of the M-SDL LRM.

Adding a trace() modifier

For debugging purposes, users might want to trace the value of an attribute or expression throughout scenario execution. A new trace() scenario modifier allows simple exploration of the changes in an expression’s value. For example, users might want to trace the Ego’s speed, the speed of the NPC (the other car), as well as their relative speed, in order to better understand the actual scenario execution:
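As an illustration, tracing those three values might look roughly like this sketch (names are assumptions; section 15.4 of the LRM has the exact syntax):

```
# Illustrative sketch -- trace speeds throughout scenario execution
do cut_in() with:
    trace(ego_car.speed)               # Ego speed
    trace(npc.speed)                   # NPC speed
    trace(ego_car.speed - npc.speed)   # relative speed
```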

For more information on trace(), please refer to section 15.4 of the M-SDL LRM.

Many more modifications are part of the recent M-SDL 20.07 release. We have added clarifications and improved descriptions throughout the LRM document. To get the complete list of additions and clarifications, please refer to the log in section 18.1 of the LRM.

Drive safe!

— Sharon

The challenge

The growing complexity of automated driving systems such as Lane Keep Assist, Lane Centering, and Adaptive Cruise Control challenges the existing verification and validation (V&V) methodologies used in the automotive industry. As these systems become more prevalent, bugs surface and failures occur. A collision example took place earlier this month, on June 2nd: a Tesla Model 3 using an automated driving function collided with a stationary truck on a Taiwanese highway.

Video 1 shows a simulated reproduction of the Tesla accident, based on released footage and using Foretify™ and Carla Simulator

Video 1: A simulated reproduction of the Tesla accident using Foretify™ and Carla Simulator

While reproducing failures after they have occurred is useful for verifying a fix, the real need is to eliminate as many failures as possible in advance. The number of possible circumstances and risk dimensions is infinite and many of these are unknowns. The upcoming SOTIF standard (ISO 21448) recognizes the challenge and gravity of the unknowns, as illustrated in Figure 1 below.

Figure 1: Knowns and unknowns illustration

While you may be able to enumerate vehicle maneuver and risk dimension categories such as sensor and camera faults or stationary objects such as cones, puddles or even faded road markings, the possible combinations of these are infinite and cannot be predicted up front. As shown above, existing technologies such as residual risk calculation provide a data-driven grade for the knowns but no formula can calculate the risk of unknown and unpredictable scenarios. The verification plan enumerates all the thought-out scenarios, but what about the unexpected and unpredicted?

As demonstrated in the Tesla incident, the unpredictable nature of the scenario space results in expensive recalls and compromised safety.

This leads to two frequently asked questions that occupy the automotive industry’s V&V efforts:

  • How do we confront infinity with finite resources and tight timelines?
  • How do we find the unknown unknowns?

The Foretellix solution – Foretify™

Foretellix’s Foretify platform combines use of controlled-random test generation to scale up and search for the unknowns, easy mixing of scenarios and risk dimensions, and powerful data analytics to address this challenge:

  • Controlled-random test generation to scale up and search for the unknowns – Foretify leverages a generic constraint solver to achieve a massive number of high-quality scenarios. This scale cannot be reached with a systematic walk over attribute values. To achieve unexpected and edge-case scenarios, all attributes are randomized by default: Foretify randomly selects locations on the map, attribute values such as distance and speed, the timing and order of events, and the angle, roll, and location of stationary objects. Users can apply constraints (simple rules) to ensure the legality and consistency of the generated test values. For example, Foretify can select either a random location or a location constrained to have enough lanes to accommodate cut-in and cut-out maneuvers.
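The constraint style described above can be sketched in M-SDL roughly as follows; the scenario and attribute names are illustrative assumptions, not the actual library content.

```
# Illustrative sketch -- everything not constrained stays random
scenario traffic.cut_in_and_out:
    car1: car
    keep(car1.speed in [30kph..70kph])   # legal, consistent speeds
    keep(num_of_lanes >= 2)              # enough lanes for the maneuvers
```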

Video 2 shows a few automatically generated variations of the same Tesla Model 3 incident. Note that the map location has changed along with other attributes.

Video 2: four different variations for the same scenario generated automatically by Foretify

Per user request, Foretify can generate hundreds of thousands of scenarios in which a truck or a random stationary object resides in random locations, orientations, lanes, colors, and so on.

  • Mixing scenarios and risk dimensions – Foretify allows mixing numerous vehicle maneuvers and risk dimensions to achieve the next level of thoroughness. The constraint solver can take sub-scenarios and find multiple proper locations, speeds, and circumstances in which they can co-exist. Also, since bugs typically come in clusters, Foretify users can create multiple scenarios and variations of already discovered bugs.
  • Test tables for requesting cross combinations of values – For adding project-specific tests, Foretify uses a simple interface: test tables, a productivity tool that allows requesting thousands of executions covering all cross-value combinations.
  • Powerful Coverage Driven Analytics – Generating a large number of tests requires powerful data analytics and management tools. Foretify provides a dashboard that displays the executed conditions and KPIs in a simple, multi-hierarchical view. The dashboard reflects what was actually executed (given the unpredictable AV responses) and allows analysis of what was tested and verified. The tool also allows removing test duplications by creating a minimal set of tests that achieves maximum V&V coverage. Utilizing functional coverage and metrics to guide the verification efforts is a proven and productive approach to pragmatically explore most of your ODD in a minimal amount of time.

Figure 2 shows an automatically generated metrics report, including both coverage metrics and KPIs.

Figure 2: A metrics report in Foretify

Foretify introduces an innovative approach, with scalable random scenario creation, scenario combination and mixing, cross-combination of values, and data analytics, to meet the ADAS and AV industry challenge of identifying the unknown unknowns.