ST-10: Show and Tell Demo 10
Fri, 8 May, 14:00 - 16:00 (UTC +2)
Location: Exhibition Hall
ST-10.1: Seeing Smoke: A Large-Scale Open-Source Multimodal Dataset for Real-Time Wildfire Detection Models
Early and automatic wildfire detection is critical for minimizing environmental damage, infrastructure loss, and threats to human life. However, real-time detection and monitoring remain challenging under varying conditions, including smoke, atmospheric distortion, motion, and illumination variability. To address these challenges, we present the Global Wildfire Prevention Dataset (GWFP): a large-scale, open-source multimodal dataset supporting robust, efficient, real-time AI-based detection models.
The GWFP dataset is compiled from public sources, including the High Performance Wireless Research and Education Network (HPWREN) and the General Directorate of Forestry (Turkey) camera networks, as well as drone-based recordings. It comprises approximately 40 GB of video and 80 GB of image data. The video component is divided into five categories: Flame/Smoke, Negative Samples, Waterdogs, Ember, and Unlabeled sequences. Flame/Smoke videos include flame-only, smoke-only, and flame-to-smoke transitions, captured from stationary cameras and drones. The Ember class contains recordings of airborne embers from active fires, while the Waterdogs class represents natural motion patterns that cause false alarms.
The image component includes seven classes: Flames, Smoke, Negative Samples, Waterdogs, Near-Infrared (NIR) Fire, NIR No Fire, and Unlabeled images. NIR imagery enables cross-spectral analysis and supports multimodal fusion under challenging visibility. A dedicated classification subset with standardized training, validation, and test splits facilitates benchmarking.
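To illustrate how the standardized splits might be consumed for benchmarking, the sketch below loads a hypothetical directory layout with torchvision; the paths, image size, and folder-per-class structure are assumptions, not part of the released dataset specification.

```python
# Minimal sketch of loading the GWFP classification subset for benchmarking.
# Directory names and layout are hypothetical; adapt them to the released dataset.
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),   # common input size; not prescribed by GWFP
    transforms.ToTensor(),
])

# Assumed layout: gwfp/classification/{train,val,test}/<class_name>/*.jpg
splits = {
    name: datasets.ImageFolder(f"gwfp/classification/{name}", transform=preprocess)
    for name in ("train", "val", "test")
}
loaders = {name: DataLoader(ds, batch_size=32, shuffle=(name == "train"))
           for name, ds in splits.items()}

print({name: len(ds) for name, ds in splits.items()})   # sample counts per split
print(splits["train"].classes)                           # e.g., Flames, Smoke, ...
```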
Firefighters often report excessive false alarms from video-based smoke detection systems, reducing trust in automated alerts and wasting critical response resources. These errors are often caused by visually similar phenomena, such as clouds, fog, or changes in lighting. This dataset, together with the accompanying real-time wildfire smoke detection demonstration, aims to advance robust video-based smoke detection and efficient edge AI systems designed with low-cost FPGA-based deployment in mind, enabling more reliable early wildfire detection and prevention.
ST-10.2: Pre-Characterization of Electromagnetic Side-Channel Leakage Using Publicly Available Information: A Case Study on E-Voting Screens
Wireless side-channel attacks (SCAs) on monitor displays, often referred to as TEMPEST attacks, constitute a class of threats in which an eavesdropper remotely infers sensitive screen information by processing electromagnetic emanations unintentionally emitted by the display. In this demo, we present public TEMPEST, a variant of the TEMPEST threat model in which publicly available system information is leveraged to identify structural signal characteristics ex ante, i.e., before any electromagnetic leakage is physically acquired. Such pre-characterized properties can both facilitate subsequent side-channel exploitation and support jamming-based mitigation strategies. We illustrate the public TEMPEST concept through a case study based on the Brazilian electronic voting machine.
This research is motivated by a public call issued by the Brazilian electoral authority aimed at anticipating security issues in the electronic voting process and by a recent judicial decision that revoked a councilman’s mandate after identifying the use of micro-cameras to violate voting privacy. We examine how publicly available information about the Brazilian electoral system can expose electronic voting machines to TEMPEST-related SCAs.
We show that key design characteristics of the Brazilian e-voting interface, such as the high-contrast images and minimal on-screen information adopted to improve usability for over 150 million electors, result in a highly distinctive spectral signature. Because these interfaces are publicly available, this signature can be analyzed offline and used to support the automatic tuning of electromagnetic parameters that vary across e-voting machine models (e.g., critical harmonic frequencies), a feature relevant to automating mitigation strategies.
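As a hedged illustration of such offline pre-characterization, the sketch below predicts candidate leakage frequencies from display timing parameters of the kind found in public documentation; the timing values, and the assumption that leakage clusters near pixel-clock harmonics, are illustrative and do not describe any actual e-voting machine model.

```python
# Hedged sketch: predicting candidate leakage harmonics from public display timing.
# The timing values below are illustrative assumptions, not the parameters of any
# actual Brazilian e-voting machine model.
H_TOTAL, V_TOTAL = 800, 525     # total pixels per line / lines per frame (incl. blanking)
REFRESH_HZ = 60.0               # vertical refresh rate

pixel_clock = H_TOTAL * V_TOTAL * REFRESH_HZ      # ~25.2 MHz for these numbers
line_rate = V_TOTAL * REFRESH_HZ                  # horizontal line rate

# Leakage energy tends to cluster near integer harmonics of the pixel clock,
# with sidebands spaced at the line and frame rates; an SDR can be tuned there.
harmonics = [k * pixel_clock for k in range(1, 6)]
for k, f in enumerate(harmonics, start=1):
    print(f"harmonic {k}: {f/1e6:.2f} MHz (sidebands every {line_rate/1e3:.2f} kHz)")
```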
The demo consists of a computer running the official public simulator of the Brazilian electronic voting machine and a software-defined radio (SDR) setup that displays the identified spectral signature and leakage-derived voting information in real time. We believe the public TEMPEST concept presented in this demo can initiate discussion among academia, industry, and government on information forensics challenges and best practices for mitigating signal-processing threats in public systems.
ST-10.3: Real-Time Continuous EEG Authentication: Streaming Neural Biometrics
This demo presents a real-time EEG-based authentication system designed to continuously verify a user’s identity from neural activity. It offers a hands-on demonstration of the feasibility of EEG-based biometrics in everyday conditions, highlighting the distinctive advantages of neural signals over traditional static biometrics for passive and continuous identity verification.
The demo is fully interactive: participants wear a consumer-grade EEG headset connected to a laptop running the pipeline and engage directly with the authentication system. Once EEG is detected, the system initializes automatically, enters a short setup phase with live signal-quality feedback, builds a biometric profile directly from the streaming EEG signals, and performs continuous verification over a sliding window of neural signatures. During authentication, a desktop session remains accessible as long as incoming signatures remain consistent with the enrolled user; when signal quality degrades or signatures drift, access is paused or locked, making security decisions directly observable and signal-driven.
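A minimal sketch of this sliding-window decision logic, assuming some embedding function embed() that maps a short EEG epoch to a feature vector, might look as follows; the window length, threshold, and cosine-similarity matching are illustrative assumptions rather than the demo's actual parameters.

```python
# Hedged sketch of continuous EEG verification: enroll a template, then score
# streaming epochs over a sliding window and emit lock/unlock decisions.
import numpy as np
from collections import deque

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def enroll(epochs, embed):
    """Build a biometric template as the mean embedding of enrollment epochs."""
    return np.mean([embed(e) for e in epochs], axis=0)

def verify_stream(epoch_stream, template, embed, window=10, threshold=0.8):
    """Yield one unlock/lock decision per epoch from a sliding window of scores."""
    scores = deque(maxlen=window)
    for epoch in epoch_stream:
        scores.append(cosine(embed(epoch), template))
        # Session stays unlocked while the windowed score remains consistent
        # with the enrolled user; otherwise access is paused/locked.
        yield "unlocked" if np.mean(scores) >= threshold else "locked"
```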
A real-time dashboard displays the internal behavior of the pipeline, including (i) live EEG traces and quality metrics, (ii) channel-activity heatmaps, (iii) low-dimensional projections of streaming signatures, and (iv) a “uniqueness” view illustrating separation from a reference cohort.
The novelty for ICASSP lies in the end-to-end demonstration of how streaming biomedical signal processing (online preprocessing plus artifact/quality handling), machine learning for multivariate, spatially structured time series (online embedding and matching), and security-oriented decision logic interact under real-world conditions. Despite a steady output of publications (>20 per year since 2010), this research area at the intersection of neuroscience and cybersecurity remains relatively underexplored and has not seen real-world translation into deployed systems.
Previously presented at NeurIPS 2025 and designed to be accessible to a broad audience, the demo aims to stimulate ICASSP-aligned discussions around neural biometric evaluation, real-time signal-processing constraints (e.g., domain adaptation, frugal real-time artifact rejection), and how EEG authentication fundamentally differs from voice recognition, another signal-based biometric.
In terms of signal-processing impact, the demo illustrates a general blueprint for real-time, closed-loop signal processing, in which signal-quality estimation, representation learning, and online decision rules are tightly coupled. It offers a testbed for discussing robustness to non-stationarity, domain shift, and low-SNR multichannel signals, challenges common to many “signals meet intelligence” applications.
ST-10.4: Listening to Food: Interactive Multimodal Signal Intelligence for Food Texture Determination
Topics: Acoustic Signal Processing, Food Texture Analysis, Multimodal Sensing, Machine Learning, Signal-Based Perception
This Show & Tell Demo presents an interactive experimental platform that demonstrates how signal processing and machine learning can be used to objectively analyze food texture through sound and vibration. Developed within the TETRA KRAK project (Vives University of Applied Sciences and KU Leuven), the demo focuses on controlled food fracture and real-time acoustic intelligence, aligning closely with the ICASSP 2026 theme ‘Where Signals Meet Intelligence’.
Food products such as croissants, biscuits, breaded products, and even Belgian chocolate are broken in a compact mechanical setup with 3D-printed probes under well-defined and reproducible conditions. During fracture, three synchronized sensors capture complementary signals: a microphone records airborne acoustic emissions, an accelerometer measures structure-borne vibrations, and a load cell registers force-displacement behavior. All signals are streamed live and processed in real time.
Advanced audio and vibration signal processing methods extract time-domain, spectral, and time-frequency features that characterize the fracture events. Machine-learning models learn relationships between these signals and sensory attributes such as crispness and crunchiness. Live visualizations present both raw sensor data and ML-based texture predictions.
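As a hedged illustration of this pipeline, the following sketch extracts a few spectral features from synthetic fracture-like bursts and fits a regressor to a crispness-like score; the features, model, and data are stand-ins, not the project's actual processing chain.

```python
# Hedged sketch of the feature-to-perception mapping: simple spectral features
# from a fracture recording feed a regressor predicting a sensory score.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestRegressor

FS = 48_000  # assumed audio sampling rate

def fracture_features(x, fs=FS):
    f, pxx = welch(x, fs=fs, nperseg=2048)
    centroid = np.sum(f * pxx) / np.sum(pxx)          # spectral centroid
    hi_ratio = pxx[f > 5_000].sum() / pxx.sum()       # high-band energy share
    rms = np.sqrt(np.mean(x**2))                      # overall level
    return np.array([centroid, hi_ratio, rms])

# Synthetic stand-in: noisy bursts whose amplitude loosely tracks a "crispness" label.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    crisp = rng.uniform(0, 1)
    burst = rng.normal(size=FS // 10) * np.geomspace(1, 0.01, FS // 10)
    X.append(fracture_features(burst * (0.5 + crisp)))
    y.append(crisp)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
```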
The key novelty of this demo is twofold. First, it demonstrates audio analysis as a new methodological tool for food research, extending acoustic signal processing into a domain historically reliant on sensory panels. Second, it introduces machine learning as a new layer in sensory science, moving beyond feature analysis toward learned models that link physical signals to human perception. The integration of controlled mechanical breaking, multimodal sensing, and ML-based analysis represents a unique approach in today’s research landscape.
Attendees can see and hear food being broken live, listen to fracture sounds via headphones, compare different gradations of crispness, observe live audio-analysis outputs, and taste the products being analyzed, making the demo a multisensory illustration of how signals meet intelligence.
ST-10.5: Open-Source FPGA-Based Echo State Network Nodes for Low-Power Distributed Wildfire Risk Detection
Wildfires pose a serious and escalating threat to ecosystems, economies, and human safety, demanding faster and more reliable detection systems. Traditional monitoring approaches often depend on centralized data processing and external communication links, which can introduce delays and reduce effectiveness in remote forest regions.
This demo presents a distributed, low-power sensing architecture for early wildfire risk detection using field-deployable edge devices built around Field Programmable Gate Arrays (FPGAs) running embedded Echo State Network (ESN) models. Each node integrates temperature and humidity sensors with a pre-trained ESN that performs real-time inference directly on the FPGA. To validate the architecture's feasibility for energy-constrained environments, we present comprehensive power-efficiency benchmarking, demonstrating that the FPGA-accelerated ESN achieves high inference throughput while minimizing total power consumption. The novelty of the system lies in combining reservoir computing with reconfigurable hardware for scalable, autonomous environmental intelligence at the forest edge, using a fully open-source hardware and software stack to facilitate reproducibility and further research.
The demonstration includes an interactive setup in which attendees can manipulate sensor conditions (for example, locally increasing temperature or modifying humidity) and immediately observe the resulting on-board ESN predictions, instantaneous power usage metrics, and device behavior through a live interface. This hands-on interaction highlights the practical potential of low-power, hardware-accelerated machine learning for real-time environmental monitoring and wildfire risk assessment.
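To make the on-node computation concrete, the following minimal sketch implements a leaky ESN inference step in floating point; the reservoir size, leak rate, input scaling, and randomly generated weights are illustrative assumptions (a deployed FPGA version would use offline-trained weights and fixed-point arithmetic).

```python
# Hedged sketch of ESN inference on a temperature/humidity stream.
import numpy as np

rng = np.random.default_rng(42)
N_IN, N_RES = 2, 100                       # inputs: temperature, humidity
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1
W_out = rng.uniform(-0.5, 0.5, (1, N_RES)) # stand-in for a readout trained offline
LEAK = 0.3

def esn_step(x, u):
    """Leaky reservoir update: x' = (1 - a) * x + a * tanh(W_in @ u + W @ x)."""
    return (1 - LEAK) * x + LEAK * np.tanh(W_in @ u + W @ x)

x = np.zeros(N_RES)
for temp_c, rel_hum in [(24.0, 0.55), (31.0, 0.35), (39.0, 0.18)]:  # sensor stream
    x = esn_step(x, np.array([temp_c / 50.0, rel_hum]))             # crude scaling
    risk = float(W_out @ x)                                          # wildfire-risk score
    print(f"T={temp_c:.0f}°C RH={rel_hum:.0%} -> risk score {risk:+.3f}")
```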