How can AI/ML improve sensor fusion performance?

Sensors are becoming ubiquitous as their prices fall and their availability improves. Sensor data, however, is subject to noise and other interference. That complexity has led to sensor fusion, which aims to outperform any single sensor by improving the signal-to-noise ratio, decreasing uncertainty and ambiguity, and increasing reliability, robustness, resolution, accuracy, and other properties. Sensor fusion uses selected sensors to compensate for weaknesses in other sensors or to improve the overall accuracy or reliability of a decision-making process. In most applications, computing resources are limited, and artificial intelligence and machine learning (AI/ML) can determine the best fusion strategy for sensor data based on real-time operating conditions.
This FAQ reviews the various fusion levels and modeling methodologies and introduces some platforms for developing and implementing sensor fusion in Industry 4.0, Internet of Things (IoT), and machine vision and image processing applications. Sensor fusion implementations can be divided into three categories based on the level of abstraction:
- Data-level fusion simply merges or aggregates multiple sensor data streams, producing a larger data set on the assumption that merging similar data sources yields increased accuracy and better insights. Data-level fusion is used to reduce noise and improve robustness.
- Feature-level fusion uses features derived from multiple independent sensor nodes or from a single node with multiple sensors. It combines these features into a multidimensional vector that can be used in pattern recognition algorithms, as the sketch after this list illustrates. Machine vision and localization functions are common applications of feature-level fusion.
- Decision-level fusion combines the local results of multiple decision classifiers into a single global decision.
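As a concrete illustration of feature-level fusion, the minimal Python sketch below concatenates statistical features from an accelerometer window and spectral features from a microphone window into a single multidimensional vector for a downstream pattern recognition model. The sensor choices, window contents, and features are illustrative assumptions, not taken from a specific product.

```python
# Minimal sketch of feature-level fusion: features from two sensors are
# concatenated into one multidimensional vector for pattern recognition.
import numpy as np

def accel_features(window):
    # Statistical signature of an accelerometer window.
    return np.array([window.mean(), window.std(), np.abs(window).max()])

def mic_features(window):
    # Crude spectral features from a microphone window.
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([spectrum.argmax(), spectrum.max(), spectrum.mean()])

rng = np.random.default_rng(0)
accel_win = rng.normal(0.0, 0.5, 256)   # stand-in accelerometer samples
mic_win = rng.normal(0.0, 0.1, 256)     # stand-in microphone samples

# The fused feature vector can feed any downstream classifier.
fused = np.concatenate([accel_features(accel_win), mic_features(mic_win)])
print(fused.shape)  # (6,) -> one multidimensional feature vector
```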
Various machine learning-based methods have been proposed to develop an optimal sensor fusion algorithm. One approach compares the results of multiple sensor fusion approaches, using Friedman’s test to analyze variance by ranks and Holm’s method to iteratively accept or reject hypotheses about the best fusion method (a minimal sketch follows). This approach can work well when a limited number of sensor modalities is used in relatively simple domains, such as recognition of simple human activities (SHA). When more complex domains, such as recognizing grammatical facial expressions, require additional sensors, improved results can be achieved by adding a “generalization step” to the statistical signature dataset step (Figure 1). The generalization step integrates the statistical signatures of datasets from different domains, yielding a larger, generalized set of metadata that can support more complex and powerful sensor fusion activities.
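The sketch below illustrates the general statistical machinery this approach relies on, using SciPy. The accuracy scores, the choice of Wilcoxon tests for the pairwise comparisons, and the 0.05 significance level are illustrative assumptions, not the cited study’s exact procedure.

```python
# Hedged sketch: rank candidate fusion methods with Friedman's test, then
# apply Holm's step-down method to the pairwise post-hoc comparisons.
# scores[m] holds the accuracy of fusion method m on five datasets.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

scores = {
    "data_level":     [0.91, 0.88, 0.84, 0.90, 0.87],
    "feature_level":  [0.93, 0.92, 0.89, 0.94, 0.91],
    "decision_level": [0.90, 0.89, 0.86, 0.91, 0.88],
}

# Friedman's test: do the methods differ when ranked per dataset?
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

if p < 0.05:
    # Pairwise post-hoc tests, ordered by p-value for Holm's method.
    names = list(scores)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    pvals = [wilcoxon(scores[a], scores[b]).pvalue for a, b in pairs]
    m = len(pvals)
    for rank, idx in enumerate(np.argsort(pvals)):
        # Holm: compare the k-th smallest p-value against alpha / (m - k).
        alpha_k = 0.05 / (m - rank)
        verdict = "reject" if pvals[idx] < alpha_k else "retain"
        print(pairs[idx], f"p={pvals[idx]:.3f} vs {alpha_k:.3f} -> {verdict}")
        if verdict == "retain":
            break  # Holm stops at the first retained hypothesis
```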

Computational algorithms are used in sensor fusion to take the different sensor inputs and produce a combined result that is more accurate and useful than any individual sensor’s data. Algorithms can be chained to provide successively refined results. Common functions of sensor fusion algorithms include the following (a short sketch contrasting the first two follows the list):
- Smoothing uses multiple measurements, including past and future samples, to estimate the value of a variable, such as a global positioning system (GPS) position, offline or in near real time.
- Filtering uses current and past measurements to determine the state of a variable, such as speed, in real time.
- State prediction analyzes previous measurements of variables, such as direction and speed, in real time to predict a current or future state, such as a GPS position.
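The distinction between smoothing and filtering can be made concrete with a short sketch. Below, a centered moving average (which needs future samples, so it runs offline or with latency) is contrasted with a causal exponential filter (which uses only current and past samples, so it can run in real time). The signal and coefficients are illustrative.

```python
# Smoothing (non-causal) vs. filtering (causal) on a noisy position track.
import numpy as np

rng = np.random.default_rng(1)
true_pos = np.linspace(0, 10, 100)             # true position track
meas = true_pos + rng.normal(0, 0.5, 100)      # noisy GPS-like readings

# Smoothing: centered moving average over past AND future samples.
smoothed = np.convolve(meas, np.ones(5) / 5, mode="same")

# Filtering: causal exponential filter using only current and past samples.
filtered = np.empty_like(meas)
filtered[0] = meas[0]
alpha = 0.3
for k in range(1, len(meas)):
    filtered[k] = alpha * meas[k] + (1 - alpha) * filtered[k - 1]

# Compare mean absolute error of each estimate against the true track.
print(np.abs(smoothed - true_pos).mean(), np.abs(filtered - true_pos).mean())
```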
Kalman filters
The Kalman filter, a form of linear quadratic estimation, is a common sensor fusion algorithm. It runs recursively, requiring only the current sensor measurements, the last estimated state, and the known uncertainties. In addition to sensor fusion, Kalman filters are also the basis of some ML algorithms. A Kalman filter works in two stages (a minimal sketch follows the list):
- Prediction estimates the current state variables and their uncertainties, accounting for environmental and other factors affecting sensor measurements.
- Update incorporates the next set of sensor measurements, refining the estimated states and weighting the estimates using the calculated uncertainties.
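The one-dimensional sketch below shows these two stages fusing noisy GPS-like position readings with a constant-velocity motion model. The noise covariances, timestep, and measurements are illustrative assumptions.

```python
# Minimal 1-D Kalman filter: predict with a constant-velocity model,
# then update with a noisy position measurement.
import numpy as np

dt = 0.1                                  # timestep (s), assumed
F = np.array([[1, dt], [0, 1]])           # state transition: [pos, vel]
H = np.array([[1, 0]])                    # we only measure position
Q = np.diag([0.01, 0.01])                 # process (model) noise covariance
R = np.array([[4.0]])                     # measurement noise covariance

x = np.array([[0.0], [1.0]])              # initial state estimate
P = np.eye(2)                             # initial estimate uncertainty

def kalman_step(x, P, z):
    # Prediction: propagate the state and its uncertainty forward.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: weight the new measurement by the Kalman gain K.
    y = z - H @ x                          # innovation (measurement residual)
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.9, 2.1, 2.9, 4.2]:            # noisy position readings
    x, P = kalman_step(x, P, np.array([[z]]))
    print(f"pos={x[0, 0]:.2f}  vel={x[1, 0]:.2f}")
```

Note how the Kalman gain K automatically weights each measurement against the model prediction according to their respective uncertainties.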
Sensor fusion developers can use a Kalman filter to obtain relatively accurate information from situations with inherent uncertainty and to reduce bias, noise, and accumulation errors. Kalman filters are used in motion control applications to estimate position over time using historical data and secondary sensors such as accelerometers and gyroscopes when data from a primary source such as a GPS signal is not available. Kalman filters are commonly found in mobile robots, drones and other Industry 4.0 systems.
Sensor fusion platforms for Industry 4.0 and IoT
With the growing number of sensors in Industry 4.0 systems comes a growing demand for sensor fusion to make sense of the mountains of data those sensors produce. Vendors are responding with integrated sensor fusion devices. For example, one intelligent condition monitoring box is designed for machine condition monitoring based on fused data from vibration, sound, temperature, and magnetic field sensors. Additional sensor modalities to monitor acceleration, rotational speed, shock, and vibration can optionally be included.
The system implements sensor fusion through artificial intelligence algorithms to classify abnormal operating conditions with finer granularity, resulting in high-probability decision making (Figure 2). This edge AI architecture can simplify the management of the big data produced by sensor fusion, ensuring that only the most relevant data is sent to the edge AI processor or the cloud for further analysis and possible use in training ML algorithms.
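A hedged sketch of this "only the most relevant data" pattern is shown below: a compact statistical signature is computed at the edge for each vibration window, and only windows whose signature looks abnormal are forwarded for deeper analysis. The features, threshold, and upload hook are illustrative assumptions, not the vendor’s algorithm.

```python
# Edge pre-filtering sketch: summarize each vibration window locally and
# forward only abnormal-looking summaries to the cloud.
import numpy as np

def signature(window):
    # Compact per-window statistics for a vibration channel.
    return {"rms": float(np.sqrt(np.mean(window ** 2))),
            "peak": float(np.abs(window).max())}

RMS_LIMIT = 0.8   # assumed alarm threshold from a healthy baseline run

def process_window(window, upload):
    sig = signature(window)
    if sig["rms"] > RMS_LIMIT:
        upload(sig)          # only the relevant summary leaves the edge
        return "forwarded"
    return "discarded"

rng = np.random.default_rng(2)
normal = rng.normal(0, 0.3, 1024)        # healthy vibration window
faulty = rng.normal(0, 1.2, 1024)        # elevated-vibration window
print(process_window(normal, print), process_window(faulty, print))
```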

Using AI/ML has several advantages:
- The AI algorithm can use data from one sensor to compensate for weaknesses in the data from other sensors.
- The AI algorithm can rank the suitability of each sensor for specific tasks and downplay or ignore data from sensors deemed less important (see the sketch after this list).
- Through continuous training at the edge or in the cloud, AI/ML algorithms can learn to identify changes in system behavior that were previously unrecognized.
- The AI algorithm can predict possible sources of failures, enabling preventative maintenance and improving overall productivity.
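The sensor-ranking point above can be illustrated with a short sketch: train a classifier on fused sensor features and inspect its learned feature importances to decide which sensors to emphasize or downplay. The synthetic data, labels, and choice of a random forest are illustrative assumptions.

```python
# Ranking sensor relevance via a classifier's feature importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 400
vibration = rng.normal(0, 1, n)
temperature = rng.normal(0, 1, n)
magnetic = rng.normal(0, 1, n)
# Synthetic fault label driven mostly by vibration, weakly by temperature.
y = (1.5 * vibration + 0.3 * temperature + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([vibration, temperature, magnetic])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

for name, w in zip(["vibration", "temperature", "magnetic"],
                   model.feature_importances_):
    print(f"{name}: {w:.2f}")   # low-importance sensors can be downplayed
```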
Sensor fusion kits are also available for IoT applications. Some are designed to the Adafruit “Feather” board specification, part of the Adafruit Feather ecosystem. One such kit includes two small circuit boards, a Feather controller and a sensor fusion FeatherWing that stacks on top of the Feather (Figure 3). The wing contains a high-precision barometric pressure sensor, a high-SNR MEMS microphone, an inertial measurement unit (IMU), and a microcontroller. The microcontroller supports state-of-the-art AI and can process data from the microphone and other sensors through local sensor fusion algorithms to trigger a notification or an alarm.

The Feather controller, with FreeRTOS firmware installed, serves as an IoT controller with Wi-Fi/Bluetooth connectivity to the wing, so pre-processed or raw sensor data from the wing can be uploaded to the AWS cloud for further processing.
Sensor fusion kit for radar + camera data
Developers of advanced driver assistance systems (ADAS), autonomous vehicles, smart retail, Industry 4.0, robotics, smart building, and smart city applications can turn to a system-on-module (SoM) AI-enabled sensor fusion kit (AI-SFK) that fuses data from a camera and a millimeter-wave (mmWave) radar for deep learning and video analytics (Figure 4). The camera and mmWave radar data are complementary, supporting real-time detection and classification of objects and measurement of range, speed, and other parameters. The radar operates at 77 GHz, and the 8 MP 4K color camera can deliver up to 21 frames per second.

This AI-SFK can significantly reduce development time. It has side-by-side panels that show the objects detected by the radar sensor on one panel and the video captured by the camera at the same location on the other. It supports a variety of standard hardware interfaces, such as CAN and USB, simplifying the integration of the SFK into the overall system development environment.
Available AI libraries support computer vision, graphics, and multimedia applications. The kit can integrate additional sensor modalities, such as thermal imaging and LiDAR, and can be extended with additional machine learning and deep learning algorithms.
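To illustrate what camera + radar fusion involves (and not the kit’s actual pipeline), the sketch below projects radar detections into the image plane with an assumed calibration matrix and attaches each detection’s range and speed to the nearest camera bounding box. The projection matrix, detections, and boxes are all hypothetical.

```python
# Illustrative radar-to-camera association via projection and
# nearest-bounding-box matching.
import numpy as np

P = np.array([[800.0, 0.0, 640.0, 0.0],   # assumed 3x4 camera projection
              [0.0, 800.0, 360.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

radar = [  # (x, y, z in meters, range m, speed m/s) per radar detection
    (2.0, 0.0, 20.0, 20.1, -3.2),
    (-3.0, 0.0, 35.0, 35.1, 0.0),
]
boxes = [(700, 320, 760, 420), (520, 330, 570, 400)]  # camera boxes (px)

def project(pt):
    # Homogeneous projection of a 3-D point into pixel coordinates.
    u, v, w = P @ np.array([*pt, 1.0])
    return u / w, v / w

for x, y, z, rng_m, speed in radar:
    u, v = project((x, y, z))
    centers = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
    i = int(np.argmin([np.hypot(u - cu, v - cv) for cu, cv in centers]))
    print(f"box {i}: range {rng_m} m, speed {speed} m/s")
```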
Summary
Sensor fusion combined with AI/ML is a powerful tool for maximizing the benefits of multiple sensor modalities. AI/ML-enhanced sensor fusion can be applied at multiple levels in a system, including the data level, feature level, and decision level. The basic functions of sensor fusion implementations include smoothing and filtering sensor data and predicting sensor and system states. Designers have a variety of sensor fusion kits and platforms available to accelerate the development of sensor fusion systems in a range of applications, including Industry 4.0, IoT, automotive, and image processing.
References
AI-Enabled Sensor Fusion Kit, Mistral Solutions
Choosing the best sensor fusion method: a machine learning approach, MDPI Sensors
Integrated sensor platform with AI algorithms—locally from big data to smart data, Analog Devices
Sensor fusion and artificial intelligence kit, Ainstein
Sensor Fusion SDK: Getting Started with FreeRTOS, Flex