ST-8: Show and Tell Demo 8
Thu, 7 May, 16:30 - 18:30 (UTC +2)
Location: Exhibition Hall
ST-8.1: SensingSP™: An Open-Source Digital Twin for 4D Imaging Radar Design, Simulation, and AI
SensingSP™ is an open-source digital twin framework that enables end-to-end modelling, simulation, and processing of 4D imaging radar systems. The demo showcases how high-level sensing requirements such as maximum range, bandwidth, angular resolution, and update rate are automatically translated into complete FMCW radar parameters including waveform design, PRF, antenna configuration, and MIMO virtual aperture sizing. The system operates directly inside Blender, allowing radar designers and researchers to construct rich 3D environments containing vehicles, pedestrians, and infrastructure.
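The requirement-to-waveform translation described above follows from standard FMCW design equations. The sketch below is a minimal illustration of that mapping, not SensingSP's actual API; the function and parameter names (and the use of maximum velocity to set the chirp interval) are assumptions for the example.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def fmcw_params(max_range_m, range_res_m, max_velocity_mps, fc_hz=77e9):
    """Translate high-level sensing requirements into FMCW chirp
    parameters using the textbook FMCW design equations (illustrative
    sketch only, not SensingSP's interface)."""
    bandwidth = C / (2 * range_res_m)        # range resolution fixes sweep bandwidth
    wavelength = C / fc_hz
    # Unambiguous velocity fixes the chirp repetition interval:
    # v_max = wavelength / (4 * T_chirp)
    t_chirp = wavelength / (4 * max_velocity_mps)
    slope = bandwidth / t_chirp
    # Beat frequency at maximum range fixes the minimum ADC sample rate
    f_beat_max = 2 * slope * max_range_m / C
    return dict(bandwidth_hz=bandwidth, t_chirp_s=t_chirp,
                prf_hz=1.0 / t_chirp, slope_hz_per_s=slope,
                adc_rate_hz=2 * f_beat_max)

# Example: 150 m range, 5 cm resolution, 30 m/s unambiguous velocity
p = fmcw_params(max_range_m=150, range_res_m=0.05, max_velocity_mps=30)
```

A 5 cm range resolution immediately implies a 3 GHz sweep, which is why such requirements push designs into the 77-81 GHz automotive band.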
A key innovation is the integration of electromagnetic ray tracing, signal generation, MIMO demodulation, CFAR detection, and 4D point-cloud formation into a single, interactive pipeline. For every frame, SensingSP computes multipath-aware propagation paths, synthesizes raw ADC cubes, performs range–Doppler–angle processing, and visualizes the resulting detections. CUDA acceleration enables rapid experimentation with radar modes, resolutions, and scene geometries. The framework also includes machine-learning modules for gesture recognition, health monitoring, WiFi sensing, and generative waveform modelling, making it a unified platform for sensing and intelligence, aligned with the ICASSP 2026 theme, “Where Signals Meet Intelligence.”
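The detection stage of such a pipeline is typically a cell-averaging CFAR over the processed power profile. The following is a minimal 1-D sketch of that step, assuming a homogeneous noise background; SensingSP's actual detector and its parameters may differ.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-3):
    """1-D cell-averaging CFAR on a power profile (illustrative sketch).
    Each cell is compared against a threshold scaled from the mean of
    the surrounding training cells, excluding guard cells."""
    n = 2 * train
    alpha = n * (pfa ** (-1.0 / n) - 1.0)   # scaling for the target false-alarm rate
    det = np.zeros(len(power), dtype=bool)
    for i in range(train + guard, len(power) - train - guard):
        left = power[i - guard - train:i - guard]
        right = power[i + guard + 1:i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))  # local noise estimate
        det[i] = power[i] > alpha * noise
    return det

power = np.ones(128)    # flat noise floor
power[50] = 100.0       # one strong target cell
hits = ca_cfar(power)
```

In the full 4D pipeline the same thresholding idea is applied across range, Doppler, and the two angular dimensions before point-cloud formation.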
The demo’s novelty lies in providing a fully open-source, physically grounded digital twin that bridges electromagnetic modelling, advanced signal processing, and AI-based sensing within a single environment.
Attendees will engage directly with the system by manipulating 3D scenes, repositioning sensors, modifying radar parameters, and immediately observing how these changes affect radar signatures, Doppler spectra, angular estimates, and 4D point clouds. This hands-on exploration illustrates key radar trade-offs, such as the impact of bandwidth on resolution, virtual aperture on angular accuracy, and motion dynamics on Doppler structure.
The demonstration will appeal broadly to the signal-processing community, including researchers in array processing, radar, machine learning for sensing, wireless environments, and digital twin technologies. It provides a practical, accessible tool for education, research, and rapid prototyping of next-generation sensing systems.
ST-8.2: Unlimited Sensing Radar: Enhancing Resolution via Modulo ADCs
Conventional radar receivers face a fundamental trade-off between dynamic range and digital resolution: strong reflections saturate the ADC while weak targets are buried in quantization noise. This limits the simultaneous detection of near–far targets and constrains achievable resolution.
We demonstrate an end-to-end radar prototype based on the Unlimited Sensing Framework (USF) that breaks this limitation using analog-domain modulo folding prior to digitization. Instead of saturating, large-amplitude returns are folded and later reconstructed algorithmically, enabling high-dynamic-range acquisition from low-resolution hardware.
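The folding and its algorithmic inversion can be sketched in a few lines. This is a minimal illustration of the USF principle, assuming the signal is oversampled so that consecutive samples differ by less than the folding threshold; the prototype's real-time reconstruction and estimation algorithms are more elaborate.

```python
import numpy as np

LAM = 1.0  # modulo threshold: the ADC's folding range

def fold(x, lam=LAM):
    """Zero-centred modulo: maps any amplitude into [-lam, lam)."""
    return np.mod(x + lam, 2 * lam) - lam

def unfold(y, lam=LAM):
    """Recover the signal from folded samples. Under the USF
    oversampling condition, sample-to-sample increments of the true
    signal stay below lam, so folding the differences of the folded
    samples recovers the true differences exactly; a cumulative sum
    then rebuilds the signal from the first sample."""
    d = fold(np.diff(y), lam)
    return np.concatenate([[y[0]], y[0] + np.cumsum(d)])

t = np.linspace(0, 1, 2000)
x = 5.0 * np.sin(2 * np.pi * 3 * t)  # amplitude 5x the ADC folding range
y = fold(x)                          # what the modulo ADC actually records
x_hat = unfold(y)                    # algorithmic reconstruction
```

A conventional ADC with the same 1 V range would clip this signal at its rails; the folded samples instead retain the information needed for exact recovery.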
The system integrates a custom modulo-ADC front-end, real-time reconstruction and estimation algorithms, and an interactive GUI into a complete acquisition–processing–detection pipeline. Attendees can directly interact with the radar: moving in front of the sensor generates folded measurements in real time, which are visualized alongside the recovered signals and detected targets. The demo allows side-by-side comparison with conventional sampling to highlight clipping and missed detections.
We showcase two configurations:
• Doppler radar: 12.35× dynamic-range expansion with Hz-scale frequency resolution.
• FMCW radar: reliable detection using extremely low-resolution sampling while maintaining 0.1 Hz precision.
These results illustrate how hardware–algorithm co-design with modulo sensing reduces sampling rate and bit depth without sacrificing accuracy.
This demonstration provides a tangible, working example of high-dynamic-range radar sensing and will be of interest to researchers in radar, sampling theory, and low-power sensing systems.
This demo validates the hardware implementation for the upcoming ICASSP conference presentation (ID: 14095), “Enhancing Doppler and FMCW Radars via Unlimited Sensing.”
ST-8.3: Hand Gesture Recognition with USF-Radar
Radar-based recognition of hand gestures, human activities, and motions has attracted significant research interest, owing to radar's ability to operate under diverse lighting conditions while preserving privacy. However, the radar signal must be digitized by an Analog-to-Digital Converter (ADC) with constrained Dynamic Range (DR) and Digital Resolution (DRes). These bottlenecks are addressed by the Unlimited Sensing Framework (USF), which inserts a zero-centered modulo operation in the analog domain.
We demonstrate USF-enabled radar for hand gesture recognition, which processes the modulo-folded measurements directly, without signal reconstruction.
The demonstration features custom-built modulo ADCs (each for I and Q channels) integrated into a Doppler radar acquisition pipeline, together with a real-time GUI processing pipeline. Attendees can interact with the real system:
- trigger the classifier by performing a hand gesture in front of the radar,
- see the modulo-folded version of their gesture waveform,
- see a wavelet scalogram and the classified hand gesture in real time.
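The reconstruction-free pipeline can be sketched as follows. The snippet is an assumption-laden illustration: it folds synthetic I/Q micro-Doppler data per channel and computes a magnitude time-frequency map directly on the folded samples, with an STFT standing in for the demo's wavelet scalogram.

```python
import numpy as np

def fold(x, lam=1.0):
    """Zero-centred modulo, as applied by the analog front-end."""
    return np.mod(x + lam, 2 * lam) - lam

def tf_map(x, win=64, hop=32):
    """Magnitude time-frequency map computed directly on the folded
    samples -- no reconstruction step (STFT used here in place of
    the wavelet scalogram for brevity)."""
    frames = np.array([x[i:i + win] * np.hanning(win)
                       for i in range(0, len(x) - win + 1, hop)])
    return np.abs(np.fft.fft(frames, axis=1))

t = np.linspace(0, 1, 1024)
iq = 3.0 * np.exp(1j * 2 * np.pi * (20 * t + 40 * t ** 2))  # synthetic gesture
folded = fold(iq.real) + 1j * fold(iq.imag)  # separate modulo ADCs for I and Q
feat = tf_map(folded)                        # features fed to the classifier
```

The resulting map, rather than a reconstructed waveform, is what a downstream gesture classifier would consume.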
In https://doi.org/10.1109/RadarConf2559087.2025.11204889, we show a 15% classification-accuracy improvement achieved by USF-Radar with low-resolution ADCs, compared with a conventional radar.
This demo provides a tangible, hardware-validated example of a radar-based human-computer interface and will be of interest to researchers in sampling theory, classification, low-power acquisition, and radar systems.
ST-8.4: Contact-Free Blood Pressure Waveform Estimation From Radar Signals Via Multimodal Dictionary Learning
Continuous, non-invasive monitoring of arterial blood pressure (BP) is critical for early cardiovascular risk detection. While Frequency-Modulated Continuous-Wave (FMCW) radar offers a promising contact-free alternative to uncomfortable cuff-based devices, accurately estimating the full BP waveform remains a significant challenge. The mapping from radar-sensed chest displacement to arterial pressure is highly non-linear and subject-specific, often obscured by respiration and noise. Conventional approaches typically resort to simple regression for scalar values (SBP/DBP) or "black-box" deep learning, lacking the physiological interpretability required for clinical trust.
This demonstration is based on our work that reconstructs high-fidelity BP waveforms from radar signals within a Multimodal Convolutional Sparse Coding (CSC) framework. Our approach leverages a learned dictionary trained on synchronized Radar, PPG, and BP data to capture shared physiological "cardiac codes". By integrating a non-linear feature extraction backend, we address the complex mapping between chest micro-vibrations and BP amplitude, enabling the precise recovery of systolic peaks and dicrotic notches. This architecture explicitly separates cardiac activity from respiratory artifacts, delivering accurate, full-waveform reconstruction with "glass-box" transparency suitable for clinical analysis.
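The "cardiac code" idea, a signal represented as a sparse combination of learned dictionary atoms, can be illustrated with a minimal (non-convolutional) sparse-coding step solved by ISTA. This is a simplified stand-in for the paper's multimodal Convolutional Sparse Coding; the dictionary here is random rather than learned from synchronized Radar/PPG/BP data.

```python
import numpy as np

def ista(D, y, lam=0.01, n_iter=500):
    """Solve min_z 0.5*||y - D z||^2 + lam*||z||_1 via ISTA:
    a gradient step on the data term followed by soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = z - (D.T @ (D @ z - y)) / L     # gradient step
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
z_true = np.zeros(128)
z_true[[5, 40]] = [1.5, -2.0]               # two active "codes"
y = D @ z_true                              # observed signal
z = ista(D, y)                              # recovered sparse code
```

In the full system the atoms are short waveform templates shared across modalities, which is what gives the model its "glass-box" interpretability.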
Our demonstration platform utilizes a visualization system running on practical radar recordings acquired with the TI IWR1443 FMCW radar, operating at 77–81 GHz. The radar is positioned approximately 50 cm above the subject’s chest while the subject lies supine. Ground-truth signals are obtained from a CNAP continuous blood pressure monitor and a finger-clip PPG sensor, synchronized with the radar data for multimodal training and validation. A custom GUI replays recorded data to visualize the signal processing pipeline across three synchronized windows: (1) The Raw Radar Waveform showing original chest displacement; (2) The Filtered Intermediate Signal, displaying the extracted sparse cardiac features and physiological localization maps; and (3) The synthesized BP Waveform overlaid with the ground truth. Below these waveforms, the interface displays physiological metrics, including SBP, DBP, Heart Rate, and Respiration Rate. This setup allows attendees to observe the system's robustness in separating vital signs from noise and its accuracy in tracking hemodynamic changes using pre-acquired experimental recordings.
ST-8.5: Event-Driven Neuromorphic Sampling and Range Estimation on Radar
This demonstration presents an innovative event-driven signal acquisition strategy for frequency-modulated continuous-wave (FMCW) radar, replacing traditional power-hungry analog-to-digital converters (ADCs) with neuromorphic encoders. While standard radar systems rely on uniform Nyquist sampling, generating a constant, heavy data stream regardless of whether a target is present, our setup performs precise range estimation using opportunistic sampling. We use a mmWave radar integrated with a neuromorphic encoder that operates asynchronously, triggering a measurement only when a significant change in signal amplitude occurs. This effectively compresses the signal at the source and ensures the system remains quiet in the absence of targets.
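The change-triggered acquisition can be sketched as a send-on-delta sampler. This is a simplified software model of the neuromorphic encoder's behavior, not its actual circuit; the threshold value is an assumption.

```python
import numpy as np

def level_crossing_sample(x, t, delta=0.1):
    """Event-driven (send-on-delta) sampling: emit a (time, value)
    event only when the signal has moved by more than `delta` since
    the last event. A static scene generates almost no events."""
    events_t, events_v = [t[0]], [x[0]]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - events_v[-1]) >= delta:
            events_t.append(ti)
            events_v.append(xi)
    return np.array(events_t), np.array(events_v)

t = np.linspace(0, 1, 5000)
active = np.sin(2 * np.pi * 10 * t)   # dechirped beat tone: target present
static = np.zeros_like(t)             # empty scene
et_active, _ = level_crossing_sample(active, t)
et_static, _ = level_crossing_sample(static, t)
```

The contrast between the two event counts is exactly what the live dashboard visualizes: dense events while something moves, near silence otherwise.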
The main novelty lies in the hardware-software synergy of applying neuromorphic principles to RF sensing. By exploiting the sum-of-weighted-complex-exponentials (SWCE) structure inherent in dechirped radar signals, we prove that range information can be accurately recovered from sparse, non-uniform event triggers. We pose the signal recovery as a sparse reconstruction problem in the Fourier domain, achieving high-resolution range profiles with significantly reduced data overhead compared to traditional methods.
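The Fourier-domain sparse recovery can be illustrated with orthogonal matching pursuit over a grid of candidate beat frequencies, evaluated at the non-uniform event times. This is a hedged sketch of the idea, not the demo's actual solver; the grid, sample count, and single-tone scene are assumptions.

```python
import numpy as np

def omp_freq(t_events, y, freqs, k=1):
    """Recover the k dominant complex exponentials from non-uniform
    event samples by orthogonal matching pursuit over a Fourier
    dictionary evaluated at the event times."""
    A = np.exp(2j * np.pi * np.outer(t_events, freqs))
    r, support = y.astype(complex), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))  # best-matching atom
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)  # refit on current support
        r = y - As @ coef                              # update residual
    return freqs[support]

rng = np.random.default_rng(1)
t_ev = np.sort(rng.uniform(0, 1, 60))     # non-uniform event times
f_true = 37.0                             # beat frequency <-> target range
y = np.exp(2j * np.pi * f_true * t_ev)    # dechirped return at the events
freqs = np.arange(0.0, 100.0, 1.0)
f_hat = omp_freq(t_ev, y, freqs, k=1)
```

Sixty irregular events suffice here because the dechirped signal is sparse in frequency, which is the structure the SWCE model exploits.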
The impact on the signal processing community is substantial, as it bridges the gap between asynchronous hardware and classical estimation theory. This work provides a practical blueprint for ultra-low-power, "always-on" sensing in resource-constrained environments, such as IoT devices and autonomous micro-robotics, where power efficiency and bandwidth are critical bottlenecks.
Interactivity is central to our showcase. Attendees will engage with a live hardware demo where they can move objects or their hands in front of the radar. An interactive dashboard will visualize the real-time "event stream" alongside the reconstructed range profile. Participants will see firsthand how the system's sampling rate dynamically adapts to movement: dense spikes for moving targets, and near-zero activity when the scene is static. This hands-on experience highlights the efficiency of event-based sensing without sacrificing the precision required for high-fidelity range estimation.