Data fusion: you don’t just drop half the measurements, you weigh them

Update: a very simple example of data fusion working in every phone: panorama photos. Stitching images is a hard image-processing problem; adding gyroscope measurements made it much simpler.

A newsletter from Comet-ML arrived in my mailbox and nearly made me spill my coffee. The passage which caught my eye:

From CVPR 2021: Tesla’s Andrej Karpathy on a Vision-Based Approach to Autonomous Vehicles

Tesla is doubling down on its vision-first approach to self-driving cars and will stop using radar sensors altogether in future releases.

From its inception, Tesla has taken a different approach to most other companies developing a self-driving car that doesn’t rely on Lidar. Instead, they used a combination of radar sensors and 8 cameras placed around the vehicle.

In this talk at CVPR 2021, Andrej Karpathy explains the challenges with this approach and why Tesla has decided to focus on cameras and stop using radar sensors. The main challenge with using different sensor types is sensor fusion. As Elon Musk puts it: “When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion.”

If you have data from real-world sensors, you don’t just discard half of it because the other half is more precise. Consider a scenario I worked on for several years: flying sensor platforms receive a radio signal; one travels at x Mach, the others at 30–60 metres per second (or multiples of that). The platform at x Mach survives for about 15 seconds after receiving the signal; the 30–60 m/s platforms survive a minute or two at most.

You can get very precise measurements using “range-based” techniques such as time of arrival or time difference of arrival, but while they provide high resolution they give you at least two points of intersection (you need three sensors, or two platforms, to resolve the ambiguity). Angle-of-arrival measurements are so “precise” that, thanks to fading, you will be lucky to get the right quadrant. There is also frequency difference of arrival (but I know you are bored already).
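To make that ambiguity concrete, here is a minimal sketch in Python (the sensor layout, emitter position and ranges are made-up illustrative numbers, not from any real system): two time-of-arrival range circles intersect at two candidate emitter positions, and a third sensor’s range is needed to pick the right one.

```python
import numpy as np

def circle_intersections(p0, r0, p1, r1):
    """Return the (up to two) intersection points of two range circles.

    Each time-of-arrival measurement constrains the emitter to a circle
    around the receiving sensor; two circles generally cross at two points.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = np.linalg.norm(p1 - p0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []  # concentric or non-intersecting circles
    a = (r0**2 - r1**2 + d**2) / (2 * d)   # distance from p0 to the chord midpoint
    h = np.sqrt(max(r0**2 - a**2, 0.0))    # half-length of the chord
    mid = p0 + a * (p1 - p0) / d
    perp = np.array([-(p1 - p0)[1], (p1 - p0)[0]]) / d
    return [mid + h * perp, mid - h * perp]

# Hypothetical sensor layout and true emitter (illustrative numbers only).
sensors = [np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([5.0, 8.0])]
emitter = np.array([4.0, 3.0])
ranges = [np.linalg.norm(emitter - s) for s in sensors]

# Two sensors give two candidate positions: the ambiguity.
candidates = circle_intersections(sensors[0], ranges[0], sensors[1], ranges[1])
print("candidates from two sensors:", candidates)

# The third sensor's range resolves which candidate is the real emitter.
best = min(candidates, key=lambda c: abs(np.linalg.norm(c - sensors[2]) - ranges[2]))
print("resolved position:", best)
```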

The field which has studied this for the last 80 years is ELINT; read Steve Blank’s write-up on what fun it was. Data fusion results in a decision: the aeroplane attacks or manoeuvres out of missile range. Get it wrong and you are down. And no, you can’t take more measurements: deal with the ones you have, or risk exposure and being shot down.

To comment on the above marketing hype: neural networks are a poor fit for data fusion to start with; there are a number of algorithms developed specifically for that problem, such as the bootstrap filter, the particle filter, evolutionary algorithms, or my own work based on the Hough Transform. A neural network will not learn the physics and limitations of radar measurements compared to cameras, but the systems human engineers design do: we can incorporate human knowledge by assigning appropriate weights to different types of measurements.
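As a toy illustration of weighting rather than discarding (a minimal sketch with made-up noise figures, not a production tracker), the snippet below fuses a radar range with a much noisier camera depth estimate by inverse-variance weighting, the same principle a Kalman filter update applies: the less precise sensor is down-weighted, not thrown away.

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance weighted fusion of independent estimates of one quantity.

    Each measurement is weighted by 1/variance, so an imprecise sensor
    contributes less but is never simply discarded.
    """
    z = np.asarray(measurements, float)
    w = 1.0 / np.asarray(variances, float)
    fused = np.sum(w * z) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Hypothetical numbers: radar measures range precisely, the camera's
# depth estimate is much noisier (variances are illustrative only).
radar_range, radar_var = 52.3, 0.5**2    # metres, metres^2
camera_range, camera_var = 55.0, 4.0**2

fused, fused_var = fuse([radar_range, camera_range], [radar_var, camera_var])
print(f"fused range: {fused:.2f} m, std: {np.sqrt(fused_var):.2f} m")
# The fused estimate sits close to the radar value, yet its uncertainty
# is lower than either sensor's alone.
```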

There is a wider philosophical piece on Bayesian vs non-Bayesian (parametric) estimation for data fusion, but it will need a lot of maths to support the discussion; it will be published on my blog at www.sci-blog.com.


Written on June 29, 2021 by Alex Mikhalev.

Originally published on Medium

Dr Alexander Mikhalev
AI/ML Architect

I am a highly experienced technology leader and researcher with expertise in Natural Language Processing and distributed systems, including distributed sensors and data.