Sensor Fusion — It’s all about Prediction

Sensor fusion systems spend a significant share of their resources on predicting the future. Here is why this improves automated driving functions.

Challenging scenario for the protection of pedestrians, inspired by the NCAP AEB test catalog. The pedestrian must be recognized as endangered before he or she enters the road so that the vehicle can perform a safe emergency stop. Predicting the pedestrian’s motion under different behavior assumptions is a crucial requirement for the sensor fusion system.
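
As a toy illustration of such behavior assumptions, the following Python sketch predicts the pedestrian’s position under two hypotheses, keeps walking at constant velocity versus slows down and stops, and checks whether either prediction reaches the road. All coordinates, speeds, and the assumed deceleration are made-up values for illustration, not part of the NCAP test definition.

```python
import numpy as np

# Hypothetical pedestrian state near the curb, in vehicle coordinates
# (x: ahead of the ego vehicle in m, y: lateral offset in m, road edge at y = 0).
position = np.array([25.0, 1.5])   # 25 m ahead, 1.5 m beside the road
velocity = np.array([0.0, -1.4])   # walking towards the road at 1.4 m/s

def predict_constant_velocity(pos, vel, dt):
    """Hypothesis 1: the pedestrian keeps walking at the current velocity."""
    return pos + vel * dt

def predict_stopping(pos, vel, dt, decel=1.0):
    """Hypothesis 2: the pedestrian slows down and stops (decel in m/s^2, assumed)."""
    speed = np.linalg.norm(vel)
    if speed == 0.0:
        return pos.copy()
    t_stop = min(dt, speed / decel)                        # time until standstill
    travelled = speed * t_stop - 0.5 * decel * t_stop**2   # distance covered until then
    return pos + vel / speed * travelled

horizon = 2.0  # predict 2 s into the future
hypotheses = {
    "keeps walking": predict_constant_velocity(position, velocity, horizon),
    "stops":         predict_stopping(position, velocity, horizon),
}

# The pedestrian counts as endangered if any hypothesis puts them on the road (y <= 0).
for name, predicted in hypotheses.items():
    print(f"{name}: predicted position {predicted}, enters road: {predicted[1] <= 0.0}")
```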

Sensor Fusion in a Nutshell

In automated driving, the combination of multiple diverse sensors compensates for individual sensor weaknesses; for example, a camera detects pedestrians more reliably than a radar, while a radar provides longer-range coverage. Converting the different sensor data into a uniform image of the vehicle environment is called sensor data fusion, or sensor fusion for short.
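
To make the idea of a uniform environment image concrete, here is a minimal sketch that fuses a radar and a camera position measurement of the same pedestrian by inverse-variance weighting. It assumes the two detections are already associated to the same object and carry independent Gaussian errors; a production system would instead run a full filter update (e.g., a Kalman filter) with data association. All measurement values and variances are invented for illustration.

```python
import numpy as np

# Hypothetical detections of the same pedestrian from two sensors, in vehicle
# coordinates (x: longitudinal in m, y: lateral in m), each with per-axis variance.
radar_pos  = np.array([25.3, 1.9])        # radar: accurate in range (x) ...
radar_var  = np.array([0.1**2, 0.8**2])   # ... but coarse laterally (y)
camera_pos = np.array([23.8, 1.55])       # camera: accurate laterally ...
camera_var = np.array([1.5**2, 0.1**2])   # ... but coarse in range

def fuse(pos_a, var_a, pos_b, var_b):
    """Inverse-variance weighted fusion of two independent position estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused_pos = (w_a * pos_a + w_b * pos_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_pos, fused_var

fused_pos, fused_var = fuse(radar_pos, radar_var, camera_pos, camera_var)
print("fused position :", fused_pos)            # close to the radar in x, the camera in y
print("fused std. dev.:", np.sqrt(fused_var))   # smaller than either sensor alone
```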

The Right Number of Models

While the motion model used in the example seems obvious, other models might be more appropriate depending on the object:

  • If the same sensor fusion system also needs to support pedestrians, a motion model with more degrees of freedom could be used, e.g., the constant velocity (CV) model, since pedestrians can change their direction of motion abruptly.
  • For bicyclists, the CCA model might be used as well. However, the model parameters probably differ from the parameters used for vehicles.
  • At some point in time, the object’s class is determined by a sensor such as a camera. Hypotheses that belong to other classes can then be removed.

Handling several models in parallel puts additional requirements on the sensor fusion system (see the sketch after this list). In particular, the system needs to be able:

  • to initialize tracks using multiple hypotheses to cope with initialization ambiguities,
  • to apply different sensor models depending on the object class,
  • to efficiently handle the hypotheses to save CPU and memory resources.
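
The following sketch shows, under simplified assumptions, how these requirements might look in code: a new track is initialized with one hypothesis per plausible object class, each tied to a class-specific motion model, a later camera classification re-weights the hypotheses, and negligible ones are pruned to save CPU and memory. The class names, model catalog, confidence value, and pruning threshold are all illustrative and not taken from any specific product.

```python
from dataclasses import dataclass, field

# Hypothetical catalog of class-dependent motion models; a real system would attach
# a full process model (CV, CCA, ...) with its noise parameters to each entry.
MODEL_CATALOG = {
    "pedestrian": "CV",
    "bicyclist":  "CCA",   # same model family as vehicles,
    "vehicle":    "CCA",   # but typically with different parameters
}

@dataclass
class Hypothesis:
    object_class: str
    weight: float   # probability that this class/model describes the object

@dataclass
class Track:
    # A new track starts with one hypothesis per plausible class to cope with
    # initialization ambiguities; each hypothesis uses its own motion and sensor models.
    hypotheses: list = field(default_factory=lambda: [
        Hypothesis(cls, 1.0 / len(MODEL_CATALOG)) for cls in MODEL_CATALOG
    ])

    def apply_classification(self, classified_as, confidence, prune_threshold=0.05):
        """Re-weight the hypotheses once a camera has classified the object,
        then prune negligible ones to save CPU and memory resources."""
        for h in self.hypotheses:
            h.weight *= confidence if h.object_class == classified_as else 1.0 - confidence
        total = sum(h.weight for h in self.hypotheses)
        self.hypotheses = [h for h in self.hypotheses if h.weight / total > prune_threshold]
        total = sum(h.weight for h in self.hypotheses)
        for h in self.hypotheses:
            h.weight /= total

track = Track()
track.apply_classification("pedestrian", confidence=0.95)
for h in track.hypotheses:
    print(h.object_class, MODEL_CATALOG[h.object_class], round(h.weight, 3))
```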
