ST-7: Show and Tell Demo 7
Thu, 7 May, 14:00 - 16:00 (UTC +2)
Location: Exhibition Hall
ST-7.1: Vehicular Hazard Detection with Multi-Object Multi-Camera Tracking in Open RAN Networks
Hazard detection is an important problem for Intelligent Transportation Systems (ITS), yet developing and deploying such systems is a complicated task, given the human presence in the loop and the large number of uncontrolled variables (e.g., traffic levels, weather conditions). In addition, a distributed hazard detection system requires a communication network connecting a large number of sensors, making this network a crucial element of the system and a potential bottleneck for its performance. A suitable platform is therefore needed for developing and validating vehicular applications that handle both the vehicular and network aspects of the system, so that an application can be migrated seamlessly to an urban environment. This demo introduces a distributed hazard detection service integrated with an Open RAN network that provides the reliability and low latency required by ITS applications. The demo, built on the CARLA simulator, shows connected vehicles driving around a virtual environment while detecting objects with machine learning models and estimating their partial trajectories (a.k.a. tracklets). This information is converted to metadata, which is transmitted to an edge server over the Open RAN network through universal software radio peripherals (USRPs). The server fuses the partial trajectories using Kalman-based methods and computes risk metrics in real time to provide timely warnings to connected vehicles. The demo shows an implementation of the Open RAN network using SRS software, introduces a new tracking association algorithm based on Kalman filters and the Mahalanobis distance, and implements different risk metrics, such as the Streetscope collision hazard measure (SHM) and time-to-collision (TtC). Transmitting only metadata reduces network traffic and ensures system scalability. The connected vehicle can be driven by attendees or set to autopilot.
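For rough intuition on the server-side fusion step, the sketch below (Python, with illustrative variable names and gate values; not the authors' implementation) shows how a Mahalanobis-distance gate can associate an incoming tracklet state with an existing Kalman track, and how a simple time-to-collision estimate can be formed from relative position and velocity:

```python
import numpy as np

def mahalanobis_gate(track_mean, track_cov, tracklet_state, meas_cov, gate=9.21):
    """Associate a reported tracklet state with an existing track if the
    squared Mahalanobis distance is below a chi-square gate
    (9.21 corresponds to ~99% for 2 degrees of freedom)."""
    innovation = tracklet_state - track_mean
    S = track_cov + meas_cov                      # innovation covariance
    d2 = innovation @ np.linalg.inv(S) @ innovation
    return d2 < gate, d2

def time_to_collision(p_ego, v_ego, p_obj, v_obj, eps=1e-6):
    """Scalar time-to-collision along the closing direction; returns inf
    if the two objects are not closing in on each other."""
    rel_p = p_obj - p_ego
    rel_v = v_obj - v_ego
    closing_speed = -rel_p @ rel_v / (np.linalg.norm(rel_p) + eps)
    return np.linalg.norm(rel_p) / closing_speed if closing_speed > 0 else np.inf

# Example: gate a 2-D position report against a track estimate, then estimate TtC
ok, d2 = mahalanobis_gate(np.array([10.0, 4.0]), np.diag([0.5, 0.5]),
                          np.array([10.6, 4.3]), np.diag([0.3, 0.3]))
print(ok, d2, time_to_collision(np.zeros(2), np.array([10.0, 0.0]),
                                np.array([30.0, 0.0]), np.zeros(2)))
```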
ST-7.2: Sensing-Triggered Adaptive ISAC: High-Speed Data and Instant Emergency Alerts
This demonstration presents an integrated sensing and communications (ISAC) system that can detect and broadcast time-critical emergencies occurring at random times. We emulate a V2X scenario in which a roadside base station continuously serves connected vehicles with a high-rate data stream. Beyond standard connectivity, the same infrastructure must recognize priority events, such as an approaching emergency vehicle, and immediately broadcast an alert to nearby road users without waiting for any pre-scheduled control interval.
The hardware testbed uses distributed ADALM Pluto+ SDRs to realize the full closed loop in real time. In baseline operation, the base station transmits background data streams to users. In parallel, a dedicated sensing node monitors a defined “sensing zone”. When a target (e.g., an ambulance) enters this zone, the sensing node triggers an immediate safety message, and the base station promptly adapts its transmission to broadcast the alert while maintaining the ongoing data service.
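A minimal sketch of this sensing-triggered loop is given below (Python; the detection threshold, mode names, and function names are illustrative assumptions, not the actual SDR implementation):

```python
import numpy as np

DETECTION_THRESHOLD_DB = 6.0   # assumed margin over the empty-zone noise floor

def zone_occupied(radar_returns, noise_floor_db):
    """Declare a detection when the strongest return exceeds the noise
    floor by a fixed margin (a simple energy detector)."""
    peak_db = 10 * np.log10(np.max(np.abs(radar_returns) ** 2) + 1e-12)
    return peak_db - noise_floor_db > DETECTION_THRESHOLD_DB

def next_transmit_mode(radar_returns, noise_floor_db):
    """Sensing-triggered adaptation: switch from pure data transmission to a
    superposed data-plus-alert waveform as soon as a target is detected."""
    return "DATA_PLUS_ALERT" if zone_occupied(radar_returns, noise_floor_db) else "DATA_ONLY"
```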
To guarantee that the emergency alert is received reliably without disrupting high-speed data, the system employs Dirty Paper Coding (DPC). Unlike time-sharing approaches that stop data to send alerts, or power-sharing approaches that boost the alert while treating interference as noise, DPC pre-cancels the predictable interference created by the simultaneous sensing and data signals. This allows the system to sustain higher data rates while ensuring the safety alert is decoded correctly.
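For intuition, the snippet below compares the alert rate achievable when the simultaneous data signal is treated as noise against the Costa dirty-paper-coding bound, where interference known at the transmitter is pre-cancelled (a schematic Gaussian-channel calculation with assumed power values, not the over-the-air implementation):

```python
import numpy as np

def rate_interference_as_noise(p_alert, p_data, noise):
    """Alert rate (bits/s/Hz) when the superposed data signal is treated as noise."""
    return np.log2(1 + p_alert / (noise + p_data))

def rate_dirty_paper(p_alert, noise):
    """Costa's result: with the interfering data signal known at the
    transmitter, DPC achieves the interference-free rate."""
    return np.log2(1 + p_alert / noise)

p_alert, p_data, noise = 1.0, 4.0, 0.1                       # illustrative powers
print(rate_interference_as_noise(p_alert, p_data, noise))    # ~0.31 bits/s/Hz
print(rate_dirty_paper(p_alert, noise))                      # ~3.46 bits/s/Hz
```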
The main novelty is a hardware demonstration of sensing-triggered dynamic coexistence: the network “reflexively” changes its signaling immediately when an event is sensed. This validates the “sensing-for-communication” paradigm for V2X and highlights how signal processing enforces reliability for emergency services even under heavy traffic. The demo is interactive: attendees generate random arrivals by placing a physical object into the sensing zone at any time. A real-time dashboard visualizes the reflex through (i) a radar view confirming detection, (ii) a data view showing the signal constellation adapting to accommodate the alert, and (iii) live performance metrics demonstrating error-free decoding of the emergency message.
Reference: H. Nikbakht, Y. C. Eldar and H. V. Poor, "An Integrated Sensing and Communication System for Time-Sensitive Targets with Random Arrivals", IEEE Journal on Selected Areas in Communications, 2026.
ST-7.3: Live Demonstration of Doppler Radiance Fields (DoRF) for Robust Human Activity Recognition Using Wi-Fi
This proposal outlines a live, interactive demonstration of a novel Wi-Fi-based Human Activity Recognition (HAR) framework built on Doppler Radiance Fields (DoRF), our recently proposed method accepted at ICASSP 2026 (Paper ID: 9898).
The core idea is that Wi-Fi multipath reflections act as a collection of virtual cameras that observe the same human motion from different unknown angles. Each path provides a one-dimensional Doppler projection of motion. DoRF fuses these complementary views into a structured latent motion representation. To our knowledge, this is the first Neural Radiance Fields (NeRF)-like framework introduced for Wi-Fi sensing.
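As a rough geometric illustration of this "virtual camera" view (a simplified bistatic model with an assumed 5.8 GHz carrier, not the DoRF formulation itself), each multipath component observes only the projection of the body's velocity onto its own path geometry, mapping a 3-D motion to a one-dimensional Doppler trace per path:

```python
import numpy as np

C = 3e8          # speed of light (m/s)
FC = 5.8e9       # assumed Wi-Fi carrier frequency (Hz)

def path_doppler(velocity, u_to_tx, u_to_rx):
    """Doppler shift (Hz) on one reflected path: the 3-D velocity is projected
    onto the sum of unit vectors pointing from the moving body toward the
    transmitter and the receiver (bistatic geometry)."""
    radial_speed = velocity @ (u_to_tx + u_to_rx)
    return FC / C * radial_speed

# The same hand motion produces a different 1-D Doppler trace on each path,
# i.e., each path acts as a differently oriented "virtual camera".
v = np.array([0.5, 0.2, 0.0])                          # hand velocity (m/s)
path_a = path_doppler(v, np.array([1.0, 0, 0]), np.array([0, 1.0, 0]))
path_b = path_doppler(v, np.array([0, 0, 1.0]), np.array([-1.0, 0, 0]))
print(path_a, path_b)   # two distinct Doppler projections of one motion
```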
The demo system consists of one or two commodity Wi-Fi routers configured as passive sniffers, a Raspberry Pi that transmits packets, and a laptop that performs real-time processing and visualization while a participant performs a gesture or activity. The goal is to recognize the performed activity from Wi-Fi Channel State Information (CSI) using our proposed method, DoRF. A custom graphical interface displays the following live components:
1-Real-time CSI monitoring, showing how wireless channels respond to human motion.
2-Online Doppler extraction, dynamically illustrating how multiple virtual viewpoints are fused into a coherent motion field (a minimal extraction sketch follows this list).
3-Live activity and gesture recognition, where DoRF representations are fed into a trained model and the predicted activity is displayed instantly.
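The sketch below illustrates one standard way such an online Doppler view can be obtained from CSI, via a short-time Fourier transform over the packet stream (a generic baseline under assumed sampling parameters, not the authors' DoRF pipeline):

```python
import numpy as np
from scipy.signal import stft

PACKET_RATE = 500.0   # assumed CSI sampling rate (packets per second)

def doppler_spectrogram(csi, nperseg=128):
    """CSI time series for one subcarrier -> Doppler spectrogram.
    The static channel component is removed first so that only
    motion-induced frequency content remains."""
    csi = csi - np.mean(csi)                                  # suppress static paths
    f, t, Z = stft(csi, fs=PACKET_RATE, nperseg=nperseg,
                   noverlap=nperseg // 2, return_onesided=False)
    return np.fft.fftshift(f), t, np.fft.fftshift(np.abs(Z), axes=0)

# Example with synthetic CSI: a 12 Hz motion component on top of a static path
n = np.arange(2000)
csi = 1.0 + 0.3 * np.exp(2j * np.pi * 12.0 * n / PACKET_RATE)
freqs, times, spec = doppler_spectrogram(csi)
print(freqs[np.argmax(spec[:, spec.shape[1] // 2])])          # ~12 Hz peak
```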
-Main Novelty and Innovations:
1-First demonstration of a NeRF-inspired representation for Wi-Fi sensing.
2-Real-time visualization of virtual Doppler cameras and radiance-field-style fusion.
3-Demonstrates how DoRF improves generalization across users and environments, addressing a core limitation of existing Wi-Fi HAR methods.
-Impact to the Signal Processing Community:
With IEEE 802.11bf bringing WLAN sensing into the standard, there is increasing demand for practical and robust wireless sensing solutions. This demo presents our proposed method, which directly addresses the critical challenge of generalization and enables reliable real-world Wi-Fi sensing.
-Interactivity for Attendees:
The demonstration is fully interactive. Attendees actively participate by performing gestures and observing real-time changes in CSI, DoRF visualizations, and recognition outputs. This hands-on experience goes well beyond static plots, making the system intuitive, engaging, and educational for the ICASSP audience.
More information about our work: https://dorf.navidhasanzadeh.com/
Data collected for this study: https://ieee-dataport.org/documents/uthamo-multi-modal-wi-fi-csi-based-hand-motion-dataset-0
ST-7.4: ELLAS: Enhancing LiDAR Perception with Location-Aware Scanning Profile Adaptation
Light Detection and Ranging (LiDAR) is widely used in robotics and automotive systems to perceive the surrounding environment. Conventional spinning LiDARs operate at a constant rotational speed and employ fixed laser scanning parameters, resulting in uniform angular resolution and range across the entire field of view. Such a uniform scanning profile, however, is suboptimal when prior information about static obstacles in the environment is available from street topology maps.
In this demo, we present ELLAS, our situation-aware LiDAR system that dynamically adapts its scanning profile to location-specific street maps. ELLAS jointly optimizes the laser ranging parameters and the instantaneous rotational speed of the spinning platform over different sectors to maximize the scanning envelope around the vehicle. By adapting these parameters to the ego vehicle's location, ELLAS achieves a longer range and a higher angular resolution in critical regions. The live demo allows attendees to see themselves represented as points in the LiDAR point cloud. Participants can directly observe how adaptive sensing produces a higher-density point cloud compared to a standard LiDAR configuration operating at the same frame rate. Finally, the attendees can see how the spin rate profile in ELLAS changes with the location of the ego vehicle and the static obstacles around it.
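As a simplified numerical illustration of this idea (assumed pulse rate, frame rate, and sector layout; not the actual ELLAS optimization), slowing the spin within critical sectors while speeding it up elsewhere keeps the frame rate fixed yet concentrates angular samples where they matter:

```python
import numpy as np

PULSE_RATE = 20_000.0   # assumed laser firing rate (pulses per second)
FRAME_RATE = 10.0       # full revolutions per second (fixed frame rate)

def angular_resolution(sector_widths_deg, time_fractions):
    """Per-sector angular resolution (degrees between consecutive pulses)
    when each sector of the revolution is allotted a fraction of the frame
    period; the fractions must sum to 1 to preserve the frame rate."""
    dwell = np.asarray(time_fractions) / FRAME_RATE           # seconds per sector
    pulses = PULSE_RATE * dwell                                # pulses per sector
    return np.asarray(sector_widths_deg) / pulses

sectors = [90.0, 180.0, 90.0]            # e.g., left flank, forward road, right flank
uniform = angular_resolution(sectors, [0.25, 0.50, 0.25])     # constant spin speed
adaptive = angular_resolution(sectors, [0.15, 0.70, 0.15])    # dwell longer ahead
print(uniform)    # ~[0.18, 0.18, 0.18] deg/pulse everywhere
print(adaptive)   # ~[0.30, 0.13, 0.30] deg/pulse: finer resolution ahead
```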
A video demonstration of ELLAS is available here: https://www.youtube.com/watch?v=DYse8EQgHYI
Note: ELLAS was presented as a live demonstration at the IEEE Sensors 2024 conference, but we have not presented this at any IEEE Signal Processing Society conference yet. We believe that a live demonstration at ICASSP would be highly valuable for introducing location-aware sensing as an emerging research direction. Although this demo will also be included as part of my ETON talk at ICASSP 2026, I believe it would be beneficial to also present it as a show-and-tell demo, which facilitates one-on-one interactions for those interested.
ST-7.5: CONVERGE Multimodal Wireless ISAC Demo: Video-Aided Beamforming
This demonstration presents a cutting-edge Integrated Sensing and Communications (ISAC) system designed to overcome the fragility of mmWave/6G links. While high-frequency bands offer immense bandwidth, they suffer from severe sensitivity to blockage. We demonstrate a "Vision-Aided Beamforming" solution that integrates a LiteOn FR2 Radio Unit (running the OpenAirInterface software stack) with a synchronized Nerian RGB-D camera.
Unlike traditional reactive systems, our setup utilizes an advanced machine learning model to fuse visual data with RF measurements. The demo showcases the system's ability to anticipate blockage events and predict the best beams when the link is severed. We leverage the CONVERGE experimental infrastructure, a unique platform integrating a mobile gNB, a User Equipment (UE), and programmable obstacles, to validate these multimodal algorithms in real time.
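A minimal sketch of such a vision-plus-RF predictor is shown below (PyTorch-style Python with illustrative layer sizes, feature dimensions, and beam count; an assumed architecture, not the deployed CONVERGE model):

```python
import torch
import torch.nn as nn

class BeamBlockagePredictor(nn.Module):
    """Fuses a visual embedding (e.g., pooled RGB-D CNN features) with an RF
    feature vector (e.g., per-beam SRS power) to predict the best beam index
    and the probability that the link will be blocked."""
    def __init__(self, vis_dim=256, rf_dim=64, n_beams=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + rf_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.beam_head = nn.Linear(128, n_beams)   # logits over candidate beams
        self.block_head = nn.Linear(128, 1)        # blockage logit

    def forward(self, vis_feat, rf_feat):
        h = self.fuse(torch.cat([vis_feat, rf_feat], dim=-1))
        return self.beam_head(h), torch.sigmoid(self.block_head(h))

# Example forward pass on a single synchronized (vision, RF) sample
model = BeamBlockagePredictor()
beam_logits, p_block = model(torch.randn(1, 256), torch.randn(1, 64))
print(beam_logits.argmax(dim=-1).item(), p_block.item())
```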
Main Novelty and Innovations:
The primary innovation is the practical, real-world implementation of multimodal Deep Learning for 6G, moving beyond the pure simulations common in this field. We demonstrate:
1. True-Multimodality: Tight synchronization between visual frames (RGB-D) and RF signatures (CSI/SRS); a simple alignment sketch follows this list.
2. OAI-Integration: A fully functional 5G NR software stack (OpenAirInterface) augmented with vision-based control.
3. Proactive-Intelligence: Machine learning models that predict optimal beam indices and blockage status using visual context and RF measurements.
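As a rough illustration of how such frame-to-RF synchronization can be handled in software, the sketch below pairs each camera frame with the nearest RF measurement within a tolerance (the tolerance and function names are assumptions, not the CONVERGE implementation):

```python
import numpy as np

MAX_SKEW_S = 0.010   # assumed pairing tolerance: 10 ms

def pair_frames_with_rf(frame_ts, rf_ts):
    """For each RGB-D frame timestamp, find the index of the closest RF
    (CSI/SRS) measurement; pairs further apart than MAX_SKEW_S are dropped."""
    frame_ts, rf_ts = np.asarray(frame_ts), np.asarray(rf_ts)
    idx = np.clip(np.searchsorted(rf_ts, frame_ts), 1, len(rf_ts) - 1)
    prev, nxt = rf_ts[idx - 1], rf_ts[idx]
    nearest = np.where(frame_ts - prev <= nxt - frame_ts, idx - 1, idx)
    keep = np.abs(rf_ts[nearest] - frame_ts) <= MAX_SKEW_S
    return [(i, int(j)) for i, (j, k) in enumerate(zip(nearest, keep)) if k]

# Example: a 30 fps camera paired against 100 Hz RF reports
print(pair_frames_with_rf(np.arange(0, 1, 1 / 30), np.arange(0, 1, 0.01)))
```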
Impact to Signal Processing Communities:
This demo directly addresses the "Where Signals Meet Intelligence" theme. It bridges the gap between Computer Vision and Wireless Signal Processing, offering a tangible platform for validating AI/ML algorithms in 6G ISAC scenarios. It provides a benchmark for cross-modal learning, crucial for future reliable low-latency communications.
Interactivity for Attendees:
The demonstration goes beyond static graphs. Attendees will interact with a "Digital Twin" dashboard of the CONVERGE chamber. They will be able to:
1. Live Multimodal Dashboard: Visualize synchronized RGB-D video streams and RF measurements in real time, overlaid with predicted blockage events and signal status.
2. Monitor Beamforming: Watch the system dynamically select and switch between the hardware’s available beams in real-time response to visual cues.
3. Test the Model: Manually toggle specific blockage scenarios to see how the beam prediction algorithm adapts instantly.