Sensor Models — Key Ingredient for Sensor Fusion in Automated Driving

Eric Richter
Feb 5, 2021


In automated driving, the term sensor model is typically used in the context of sensor simulation, e.g. as part of a validation chain. While sensor models are a major aspect of simulation, they are equally important for the performance of the environmental model and the sensor fusion it contains. Scalable sensor fusion architectures allow sensor models to be exchanged easily, so that tested components can be reused and more development resources can be spent on sensor modeling.

What is a Sensor Model?

Regardless of whether a sensor model is used for simulation or sensor fusion, it describes (and typically approximates) how objects such as cars and pedestrians interact with a particular sensor. Here, “interacting” covers two aspects: object detectability and object appearance.

Detectability This part of the sensor model describes whether a sensor can detect an object. Suppose a camera is part of the sensor setup and an AI-based image detector was trained only on images showing the rear sides of vehicles. Such an image detector would correctly detect vehicles whose rear side is visible in the image, e.g. vehicles driving in front of the automated vehicle. However, the same detector would miss crossing vehicles, as it was never trained on side views of vehicles. In practice, things are not that black and white; rather, a sensor detects one object better than another, e.g. 9 out of 10 cars are detected when they are driving in front of the automated vehicle, but only 7 out of 10 when they are crossing. Beyond this single example, many object properties may influence the detection characteristic of a sensor, e.g. the object’s aspect angle, distance, size, and type. The environment may also influence detection performance, e.g. weather or illumination conditions. Finally, the sensor’s host system may play a role, e.g. the detection characteristic may differ depending on the sensor’s mounting position. Depending on its application and the required model approximation quality, a resulting detection model may contain an arbitrary selection of the aforementioned properties.
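To make this concrete, here is a minimal sketch of such a detectability model in Python. All rates and thresholds are illustrative assumptions (not measured sensor parameters); it combines the aspect-angle effect from the example above with an assumed decay of the detection rate over distance:

```python
import math

# Illustrative detectability model: probability that a sensor detects a
# vehicle, as a function of distance and aspect angle. The numbers below
# are assumptions for demonstration, not real sensor parameters.
def detection_probability(distance_m: float, aspect_angle_deg: float) -> float:
    # Base rate: ~9 out of 10 leading vehicles (rear view, ~0 deg aspect)
    # are detected, ~7 out of 10 crossing vehicles (~90 deg aspect).
    base = 0.9 - 0.2 * abs(math.sin(math.radians(aspect_angle_deg)))
    # Assumed range dependency: constant up to 50 m, exponential decay beyond.
    if distance_m <= 50.0:
        range_factor = 1.0
    else:
        range_factor = math.exp(-(distance_m - 50.0) / 100.0)
    return base * range_factor

print(detection_probability(30.0, 0.0))    # leading vehicle, close: ~0.9
print(detection_probability(120.0, 90.0))  # crossing vehicle, far: much lower
```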

Figure: Exemplary radar-camera fusion where the radar’s detection rate decreases with increasing object distance. The object confirmation time is reduced when the radar’s detection characteristic is properly modeled (solid lines), compared to a radar model with a constant detection rate (dashed lines).

Appearance While the detectability part of a sensor model describes whether a sensor can detect an object, the appearance model describes how a sensor perceives an object. To a radar sensor, for example, an object could “appear” in the form of three values: distance, azimuth angle, and Doppler velocity. This example shows that the appearance model includes sensor limitations: a radar can only observe the radial component of the object’s velocity (the Doppler velocity), so for a crossing object the observed Doppler velocity contains only a fraction of the actual object velocity. Additionally, the appearance or measurement model typically describes the errors in the observed quantities, e.g. a radar may observe the object’s distance with an accuracy of ±1 m. As with the detection characteristic, the appearance characteristic may depend on the object itself, e.g. the distance accuracy could be better for cars than for trucks, or on environment and/or host vehicle conditions.
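A hedged sketch of such a radar appearance model: it maps an object state [x, y, vx, vy] in the sensor frame to the three observed quantities range, azimuth, and Doppler velocity, plus additive noise. The noise standard deviations are illustrative assumptions (e.g. the ±1 m range accuracy mentioned above), not real sensor data:

```python
import numpy as np

# Radar appearance (measurement) model sketch: object state -> observation.
def radar_appearance(state: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    x, y, vx, vy = state
    r = np.hypot(x, y)                # range to the object
    azimuth = np.arctan2(y, x)        # bearing angle
    doppler = (x * vx + y * vy) / r   # radial velocity component only
    # Assumed accuracies: 1 m range, 0.5 deg azimuth, 0.2 m/s Doppler.
    noise_std = np.array([1.0, np.deg2rad(0.5), 0.2])
    return np.array([r, azimuth, doppler]) + rng.normal(0.0, noise_std)

# A crossing object (vx = 0, vy = 10 m/s, directly ahead) yields a Doppler
# velocity near zero: the radar sees only a fraction of the true speed.
z = radar_appearance(np.array([50.0, 0.0, 0.0, 10.0]), np.random.default_rng(0))
print(z)
```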

How Sensor Models Influence Sensor Fusion Architectures

In theory, we could design sensor models that are very close to the real sensor behavior, e.g. physical models or highly complex phenomenological models. In practice, we are often limited in the degrees of freedom available for sensor modeling. This is mainly due to limitations of the intended execution hardware and a lack of relevant data for identifying and parametrizing such complex models.

Hardware Typical ADAS and L2 functions run on embedded hardware such as the Infineon AURIX safety processor, which limits the choice of computationally tractable algorithms. For object fusion and tracking, the selection is often reduced further, to Kalman-filter-based algorithms and architectures only.

Kalman filters are great: they are well known, and their closed form allows execution on typical automotive embedded hardware. However, Kalman filters cannot directly make use of arbitrarily complex models. Instead, they are limited to so-called uni-modal sensor models. If, for example, cars appear differently to the sensor than trucks do, a single Kalman filter cannot handle this, as it would require a bi-modal model.
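For illustration, a minimal linear Kalman filter update step in Python. It assumes a single uni-modal Gaussian measurement model z = Hx + v with v ~ N(0, R); note that H and R encode exactly one sensor model, which is the limitation discussed above:

```python
import numpy as np

# Standard linear Kalman filter measurement update (uni-modal Gaussian).
def kalman_update(x, P, z, H, R):
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y                     # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # updated covariance
    return x_new, P_new

# Example: position-only measurement of a [position, velocity] state.
x = np.array([0.0, 1.0]); P = np.eye(2)
H = np.array([[1.0, 0.0]]); R = np.array([[0.5]])
x, P = kalman_update(x, P, np.array([0.3]), H, R)
print(x, P)
```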

Requirements of a Scalable Sensor Fusion Architecture

Modifications to the sensor fusion architecture can compensate for this limitation to some extent. However, these modifications are model-specific and often require deep changes to the overall algorithm and code. Especially when it comes to production use and to automotive development processes and regulations such as ISO 26262, such modifications become cost- and time-intensive if applied manually in each and every project.

Separated Sensor Models For a sensor fusion architecture to become scalable, sensor models need to be completely separated from the sensor fusion architecture. Furthermore, the sensor model itself needs to be split into its two parts, detectability and appearance. This way, sensor models can be developed as dedicated small units that are testable and easily exchangeable. It also simplifies model identification and parametrization.
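One way such a separation could look in code is a small interface that exposes exactly the two parts named above. The names here (SensorModel, detection_probability, expected_measurement) are illustrative, not from a specific framework:

```python
from typing import Protocol
import numpy as np

# Sketch of a sensor model interface, separated from the fusion core.
class SensorModel(Protocol):
    def detection_probability(self, obj_state: np.ndarray) -> float:
        """Detectability part: chance that this sensor detects the object."""
        ...

    def expected_measurement(self, obj_state: np.ndarray) -> np.ndarray:
        """Appearance part: how the object shows up in sensor coordinates."""
        ...
```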

Reusability From a mathematical perspective, exchanging a sensor model in a sensor fusion algorithm is relatively easy. When it comes to the implementation, however, exchanging a sensor model typically requires adapting vast amounts of code at different architectural levels. In particular, if the sensor fusion runs on embedded hardware and needs to ensure functional safety, replacing a sensor model should neither require manual adaptation of these pieces of code nor a renewed assurance of their safe execution. Instead, a scalable sensor fusion architecture needs to support the exchange of sensor models while safely reusing the (much larger) remaining parts.
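A hypothetical illustration of this reuse, building on the interface sketched above: the fusion core below never changes when a sensor is exchanged, only the injected model object does. RadarGen2 and RadarGen3 are stand-ins for two real, independently tested sensor model units:

```python
import numpy as np

# Two exchangeable sensor model units (dummy behavior for illustration).
class RadarGen2:
    def detection_probability(self, obj_state): return 0.9
    def expected_measurement(self, obj_state): return obj_state[:2]

class RadarGen3:
    def detection_probability(self, obj_state): return 0.95
    def expected_measurement(self, obj_state): return obj_state[:2]

# The fusion core only depends on the SensorModel interface.
class FusionCore:
    def __init__(self, model): self.model = model

# Swapping the radar does not touch (or re-verify) FusionCore itself:
fusion = FusionCore(RadarGen2())
fusion = FusionCore(RadarGen3())
```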
