Bridging the Gap between Real-world and Synthetic Images for Testing Autonomous Driving Systems

Autonomous driving systems (ADS) use deep neural networks (DNNs) for tasks such as lane keeping and object detection. These networks are typically trained on real-world images but tested on synthetic images produced by simulators. The problem is that synthetic images differ in appearance from real ones, so test results may not reflect how the system behaves on real-world inputs.

To narrow this gap, researchers use image translators: tools that make synthetic test images look more like real images. Common translators include CycleGAN and neural style transfer. This research also introduces a new translator called SAEVAE.
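To make the translator's role concrete, here is a minimal Python sketch of where it sits in the simulation testing loop. All names (simulator_render_frame, translate_to_real, ads_model_predict) are hypothetical placeholders, not the paper's code; a real translator would be a trained network.

```python
import numpy as np

# Minimal sketch of the testing loop. All names below are illustrative
# placeholders, not the paper's actual code or APIs.

def simulator_render_frame() -> np.ndarray:
    """Stand-in for a frame rendered by a driving simulator."""
    return np.random.rand(160, 320, 3).astype(np.float32)

def translate_to_real(synthetic_frame: np.ndarray) -> np.ndarray:
    """Stand-in for a synthetic-to-real translator such as SAEVAE,
    CycleGAN, or neural style transfer. A real translator is a trained
    network; here the frame is simply passed through."""
    return synthetic_frame

def ads_model_predict(frame: np.ndarray) -> float:
    """Stand-in for a DNN trained on real-world images, e.g. one that
    predicts a steering angle for lane keeping."""
    return float(frame.mean())  # dummy prediction

# Key idea: translate every simulator frame before it reaches the DNN,
# so the model sees inputs closer to its real-world training data.
for step in range(5):
    synthetic = simulator_render_frame()
    realistic = translate_to_real(synthetic)  # bridge the sim-to-real gap
    steering = ads_model_predict(realistic)
    print(f"step {step}: steering = {steering:.3f}")
```

The design point is that the translator is a drop-in stage between the simulator and the DNN under test: neither the simulator nor the trained model needs to change.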

The study asks two main questions:

  1. Do these translators affect the ability of test images to reveal faults in the ADS?
  2. Do they slow down simulation-based testing?

The results are promising:

  • Translators improve testing accuracy by bringing synthetic images closer to real ones.
  • SAEVAE outperforms both CycleGAN and neural style transfer.
  • Translators do not reduce the diversity of the test data or its ability to reveal faults.
  • SAEVAE adds very little time to simulations, so it is practical to use (a way to measure this overhead is sketched after this list).
  • Translators bring offline and simulation-based test results closer together, which reduces overall testing costs.
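The practicality claim depends on how much time the translator adds per simulated frame. Below is a minimal sketch of how one might measure that overhead in their own setup; translate_to_real is a hypothetical placeholder that does no work, so only the measurement pattern carries over.

```python
import time
import numpy as np

def translate_to_real(frame: np.ndarray) -> np.ndarray:
    """Placeholder translator; in practice this would run the trained
    SAEVAE (or CycleGAN / style-transfer) network on the frame."""
    return frame

# Time the translator over a batch of frames to estimate its
# per-frame overhead in the simulation loop.
frames = [np.random.rand(160, 320, 3).astype(np.float32) for _ in range(100)]

start = time.perf_counter()
for frame in frames:
    translate_to_real(frame)
elapsed = time.perf_counter() - start

per_frame_ms = 1000.0 * elapsed / len(frames)
print(f"translator overhead: {per_frame_ms:.3f} ms per frame")
```

For a translator to be practical, this per-frame cost should be small relative to the simulator's frame budget, e.g. roughly 33 ms per frame if the simulation runs at 30 FPS.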

In short, image translators such as SAEVAE make the testing of autonomous driving systems more accurate, reliable, and efficient, even when the tests rely on synthetic images.

You can read the paper here.
