ST-9: Show and Tell Demo 9
Fri, 8 May, 09:00 - 11:00 (UTC +2)
Location: Exhibition Hall

ST-9.1: GridSense: Ask Your Power Grid

Kim, Changhun (Pattern Recognition Lab, FAU); Karim, Redwanul (Pattern Recognition Lab, FAU); Conrad, Timon (Institute of Electrical Energy Systems, FAU); Riebesel, David (Institute of Electrical Energy Systems, FAU); Mayerhofer, Lukas (LEW Verteilnetz); Mengele, Fabian (LEW Verteilnetz); Oelhaf, Julian (Pattern Recognition Lab, FAU); Gourmelon, Nora (Pattern Recognition Lab, FAU); Arias Vergara, Tomás (Pattern Recognition Lab, FAU); Jaworski, Michael (LEW Verteilnetz); Maier, Andreas (Pattern Recognition Lab, FAU); Jäger, Johann (Institute of Electrical Energy Systems, FAU); Bayer, Siming (Pattern Recognition Lab, FAU)
GridSense is an open-source Python package and web demo for natural-language exploration of operational power-grid models, grounding large language models (LLMs) in a Neo4j knowledge graph. Grid data is often exchanged as CGMES (Common Grid Model Exchange Standard) / CIM (Common Information Model) RDF/XML: interoperable, but difficult to query directly because it is document-centric, often requiring full-file parsing and offering limited indexing and topology traversal. GridSense converts CGMES/CIM into a queryable Neo4j graph and applies a GraphRAG-style workflow: it detects grid-related intent, generates schema-aware Cypher, executes it on the database, and summarizes strictly from the retrieved results to reduce hallucinations and improve auditability.

The representation follows a two-layer architecture. A static CIM knowledge graph stores hierarchy, topology, and equipment. A dynamic snapshot overlay links time-indexed operating states, such as load (P/Q), terminal power flows, bus voltage magnitude/angle, breaker status, and tap positions, to the same assets, enabling efficient time-series queries without duplicating static equipment across timestamps.

In the ICASSP 2026 Show and Tell, we demonstrate interactive inspection of grid assets and conversational querying, including: (1) transformer utilization trends over snapshots; (2) extracting line parameters and connectivity with Cypher to build the network admittance (Y-bus) matrix (a code sketch of this step follows the abstract); and (3) restoration decision support by computing impedance-weighted shortest paths under current switch states to propose candidate energization routes, leveraging Neo4j traversal and graph algorithms. We also highlight optional fusion of exogenous signals (e.g., weather) to enrich operational context.

Impact on the signal processing community: GridSense bridges graph-structured, time-varying grid signals to verifiable retrieval and computation, supporting reproducible topology-aware analytics (matrix construction, temporal monitoring, and graph-algorithm decision primitives).

Interactivity for attendees: participants can ask their own questions, see how each question is translated into Cypher, execute it live against the Neo4j graph, and click grid elements to inspect connected topology and time-indexed state; the demo returns retrieved subgraphs and plots derived from query results. The grid dataset is anonymized because it represents critical infrastructure. Finally, we provide "bring-your-own-grid" documentation to import CGMES/CIM XML, validate mappings, build snapshot overlays, and run the same LLM-to-Cypher workflow on a user's dataset. Demo video: https://www.youtube.com/playlist?list=PL2EIbrGMjR_m2khSEMiUtbKo0dY-LOn3-
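Below is a minimal sketch of the retrieve-then-compute step named in (2): a hand-written Cypher query stands in for an LLM-generated one, fetches branch parameters from Neo4j, and the Y-bus matrix is assembled from the results. The node labels, property names, and credentials are illustrative assumptions, not the actual GridSense schema.

    import numpy as np
    from neo4j import GraphDatabase

    # Stand-in for a schema-aware, LLM-generated Cypher query; the labels
    # and properties (:Bus, :ACLineSegment, .r, .x, .bch) echo common CIM
    # naming but are assumptions, not the GridSense schema.
    CYPHER = """
    MATCH (a:Bus)-[:CONNECTS]-(l:ACLineSegment)-[:CONNECTS]-(b:Bus)
    WHERE a.idx < b.idx
    RETURN a.idx AS f, b.idx AS t, l.r AS r, l.x AS x, l.bch AS bch
    """

    def build_ybus(uri="bolt://localhost:7687", auth=("neo4j", "secret")):
        """Fetch branch parameters and assemble the admittance matrix."""
        driver = GraphDatabase.driver(uri, auth=auth)
        with driver.session() as session:
            rows = [record.data() for record in session.run(CYPHER)]
        driver.close()
        n = 1 + max(max(r["f"], r["t"]) for r in rows)  # bus count
        Y = np.zeros((n, n), dtype=complex)
        for r in rows:
            y = 1.0 / complex(r["r"], r["x"])  # series admittance 1/(R+jX)
            ysh = 1j * r["bch"] / 2.0          # half of the line charging
            f, t = r["f"], r["t"]
            Y[f, f] += y + ysh
            Y[t, t] += y + ysh
            Y[f, t] -= y
            Y[t, f] -= y
        return Y

The same retrieve-then-compute pattern extends to the impedance-weighted shortest-path queries used for restoration support, with summaries produced strictly from the retrieved results.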

ST-9.2: TwinShip: A Decision Support Platform for Maritime Operations

Loukas Ilias, Afroditi Blika, Ariadni Michalitsi-Psarrou, Theodoros Florakis, Anastasia Askouni, Georgios Klavdianos, Giannis Xidias, Dimitris Askounis, Spiros Mouzakitis (Decision Support Systems Laboratory, School of ECE, National Technical University of Athens)
1. The TwinShip demo presents an AI-powered maritime analytics platform developed within the Horizon Europe TwinShip project, building on the VesselAI platform deployed under the Horizon 2020 programme. The demo focuses on scalable data integration, signal processing, and operational performance analysis for ships. Heterogeneous maritime data sources, including AIS trajectories, onboard engine and propulsion measurements, and environmental signals, are ingested, harmonized, and stored in a secure datalake. The platform supports energy-efficient navigation, emissions-aware operation, and data-driven decision making through advanced analytics services. As part of its analytical capabilities, the platform includes the Engine-Propeller Combinator Diagram (EPCD) as a supporting representation linking indicative operational and environmental inputs to propulsion and energy performance outputs. The demo does not present digital twins, but highlights the analytical foundations supporting TwinShip's long-term digital twin vision.

2. The main novelty of the demo lies in its modular analytics architecture. TwinShip provides an environment for constructing AI and data analytics workflows, allowing researchers to develop and test algorithms using JupyterLab with direct access to the secure datalake. Dagster orchestrates data processing and model pipelines, enabling transparent experimentation and repeatable results (see the pipeline sketch after this abstract). The platform also incorporates the EPCD as a structured analytical representation supporting propulsion and energy performance analysis. In addition, the integration of AI-agent frameworks through LangGraph enhances assisted data exploration and analysis across large datasets.

3. The demo shows how modern signal processing methods, such as time-series analysis, sensor fusion, spatiotemporal modeling, and data-driven estimation, can be operationalized within a real-world maritime analytics platform. By offering structured access to heterogeneous datasets and end-to-end analytics workflows, TwinShip provides a concrete platform for applied signal processing research.

4. Interactivity is a key element of the demo. Attendees interact with live dashboards and user-interface components to explore vessel trajectories, fuel consumption patterns, performance indicators, and indicative operating points within the Engine-Propeller Combinator Diagram. Guided demonstrations show how analytics workflows are constructed in JupyterLab, executed and monitored through Dagster, and explored through AI-assisted analysis using LangGraph. This experience provides transparency across the analytics lifecycle.
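As a companion to point 2, here is a minimal Dagster sketch of an ingest-harmonize-analyze pipeline of the kind the platform orchestrates. The asset names, columns, and toy logic are illustrative assumptions, not the TwinShip implementation.

    import pandas as pd
    from dagster import Definitions, asset

    @asset
    def raw_ais() -> pd.DataFrame:
        # Stand-in for ingesting AIS trajectories from the datalake.
        return pd.DataFrame({
            "mmsi": [211000001, 211000001, 211000002],
            "sog_knots": [12.0, 61.0, 14.5],  # one implausible speed
            "ts": pd.to_datetime(["2025-01-01", "2025-01-01", "2025-01-02"]),
        })

    @asset
    def harmonized_ais(raw_ais: pd.DataFrame) -> pd.DataFrame:
        # Harmonization step: rename columns and drop implausible speeds.
        df = raw_ais.rename(columns={"sog_knots": "speed_kn"})
        return df[df["speed_kn"].between(0, 50)]

    @asset
    def mean_speed_per_vessel(harmonized_ais: pd.DataFrame) -> pd.Series:
        # Toy performance indicator derived from the harmonized signal.
        return harmonized_ais.groupby("mmsi")["speed_kn"].mean()

    # Dagster discovers and orchestrates the assets from this definition,
    # which is what makes each run transparent and repeatable.
    defs = Definitions(assets=[raw_ais, harmonized_ais, mean_speed_per_vessel])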

ST-9.3: Signal-Driven Autonomous Satellite Tasking via Tip-and-Cue: An Interactive Demo

Gil Weissman, Amir Ivry, Israel Cohen (Faculty of Electrical and Computer Engineering, Technion – Israel Institute of Technology, Haifa, Israel)
This Show & Tell demonstration presents an interactive implementation of an AI-driven Tip-and-Cue framework for autonomous satellite sensing, focusing on how heterogeneous spatiotemporal signals can drive task formulation and scheduling decisions in a closed loop. The demo is designed to make the behavior of such systems observable and understandable through direct interaction. The demonstration centers on an interactive map-based visual interface that shows a geospatial scene with satellite ground tracks and time-varying signal indicators such as trajectory deviations and natural disasters. Predefined scenarios are initialized in which signals evolve over time and are visualized as overlays on the scene. The signals are converted into tips, probabilistic hypotheses about events of interest; the tips are in turn converted into cues, candidate satellite imaging tasks. Unlike static task-planning approaches, the system continuously updates cue priorities and schedules as the signals evolve. Attendees can interact with the system by adjusting a set of high-level parameters using sliders and toggles. These controls modify properties of the signals (e.g., strength, uncertainty, or temporal persistence) as well as system constraints (e.g., sensing priority or satellite availability). After each interaction, the system updates the generated tips and cues and recomputes feasible acquisition windows and scheduling decisions. The resulting changes are immediately reflected in the visualization, allowing attendees to observe how different signal interpretations lead to different sensing outcomes. The demo provides an intuitive view of how future satellite systems can move from passive signal analysis toward intelligent, self-tasking sensing architectures. It is relevant to the ICASSP community because it illustrates how signal processing outputs can be integrated into AI-driven decision systems operating under constraints. By enabling hands-on exploration, the demonstration supports intuition-building around adaptive sensing, interpretability, and signal-driven autonomy.
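The following toy sketch illustrates, under stated assumptions, the closed loop the demo visualizes: tips carry event probabilities, cues are scored from tip confidence, operator priority, and signal uncertainty (the slider-controlled quantities), and the schedule is recomputed after every update. The scoring rule and field names are illustrative, not the demo's actual algorithm.

    from dataclasses import dataclass

    @dataclass
    class Tip:
        event: str          # hypothesized event of interest
        confidence: float   # probability that the event is real
        priority: float     # operator-set sensing priority

    def cue_score(tip: Tip, uncertainty: float) -> float:
        # Higher confidence and priority raise a cue's score; signal
        # uncertainty (a slider in the demo) discounts it.
        return tip.confidence * tip.priority / (1.0 + uncertainty)

    def schedule(tips, uncertainty, max_tasks):
        # Greedy stand-in for the scheduler: keep the highest-scoring
        # cues that fit the satellite-availability budget.
        ranked = sorted(tips, key=lambda t: cue_score(t, uncertainty),
                        reverse=True)
        return [t.event for t in ranked[:max_tasks]]

    tips = [Tip("wildfire", 0.9, 1.0),
            Tip("trajectory deviation", 0.6, 0.8),
            Tip("flood", 0.7, 0.9)]
    # Re-running schedule() after any slider change plays the role of
    # the demo's closed-loop re-prioritization.
    print(schedule(tips, uncertainty=0.2, max_tasks=2))  # ['wildfire', 'flood']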

ST-9.4: esp-data: A Unified Python Library for Large-Scale Cross-Taxonomic Bioacoustics Research

Gagan Narula, Milad Alizadeh, Ellen Gilsenan-McMahon, Paul Laisne, Marius Miron, David Robinson, Masato Hagiwara, Benjamin Hoffman, Sara Keen, Maddie Cusimano, Ines Nolasco, Logan James, Anthony Fine, Eklavya Sarkar, Emmanuel Fernandez, Jules Cauzinille, Gregory Yauney, Diane Kim, Laura Hay, Jane Lawton, Brittany Solano, Matthieu Geist, Emmanuel Chemla, Aza Raskin, Olivier Pietquin (affiliation for all authors: Earth Species Project)
Recent bioacoustic foundation models show performance gains with larger, more diverse datasets, but the field struggles with fragmented data and incompatible formats. We present esp-data, an open-source Python library developed by Earth Species Project (ESP) that provides a unified, cloud-native interface for loading, transforming, and combining over 35 curated bioacoustics datasets spanning birds (BirdSet, Xeno-Canto, and iNaturalist), marine mammals (whale and dolphin phonations with ecotype annotations), primates (gibbon, macaque, and marmoset vocalizations), amphibians (AnuraSet), and insects (InsectSet459). All datasets with permissive licenses are hosted publicly on ESP infrastructure, providing a single source of truth. New datasets will be added periodically, and researchers can easily integrate, and optionally store, their open-source datasets within the esp-data infrastructure, with access to the full collection under a consistent and actively maintained interface.

The library introduces the following innovations:
(1) A registry-based dataset abstraction that enables YAML-driven dataset configuration for reproducible ML pipelines; users can install the library, plug in their research datasets by "registering" them, and seamlessly work with the library's tools (a toy sketch of this design follows the abstract).
(2) Cloud storage and local filesystem access via a unified path abstraction.
(3) A composable transform system for common dataset operations such as filtering, balanced sampling over features, adding taxonomic information, and label encoding; transforms also allow for easy extension.
(4) Flexible dataset concatenation strategies and simple abstractions such as "chaining" heterogeneous sources while preserving annotation fidelity.
(5) A backend API that integrates with popular libraries such as Pandas, Polars, and the PyTorch DataLoader.
(6) An iterable-only "streaming" mode that allows users without high-memory compute infrastructure to iterate over their desired datasets.
(7) Comprehensive documentation and tutorial notebooks.

Our demonstration showcases workflows for training bioacoustic classifiers, real-time dataset discovery, transform pipelines for class balancing, and cross-taxa model evaluation, all through a consistent API that abstracts storage backends and preprocessing complexity. esp-data addresses critical infrastructure gaps in, and lowers barriers to, multi-species acoustic analysis. The library helps accelerate AI applications in conservation monitoring, ethological research, and the emerging field of interspecies communication.
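To make the registry-and-transform design in (1) and (3) concrete, here is a self-contained toy in plain Python. The names and mechanics are illustrative assumptions and do not reproduce the actual esp-data API; consult the library's documentation for real usage.

    # Toy registry: named dataset loaders registered once, looked up by key.
    REGISTRY = {}

    def register(name):
        def wrap(loader):
            REGISTRY[name] = loader
            return loader
        return wrap

    @register("toy_birds")
    def toy_birds():
        # Stand-in for a cloud-hosted bioacoustics dataset.
        return [{"species": "wren", "dur_s": 3.1},
                {"species": "owl", "dur_s": 12.0},
                {"species": "wren", "dur_s": 2.4}]

    # Composable transforms: each maps a list of records to a new list,
    # so pipelines can be chained and extended freely.
    def filter_by(pred):
        return lambda rows: [r for r in rows if pred(r)]

    def chain(*transforms):
        def run(rows):
            for t in transforms:
                rows = t(rows)
            return rows
        return run

    pipeline = chain(filter_by(lambda r: r["dur_s"] < 10.0))
    print(pipeline(REGISTRY["toy_birds"]()))  # the two short wren clips remain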

ST-9.5: UtilityTwin: A Knowledge Graph based Digital Twin for Municipal Utilities

Adithya Ramachandran, Thorkil Flensmark B. Neergaard, Andreas Maier, and Siming Bayer
Municipal utilities work with massive volumes of heterogeneous data, ranging from high-frequency sensor signals (smart meters, SCADA) and static geospatial topologies to unstructured historical records and data from external sources. Effective analysis is currently hindered because these modalities remain siloed, creating technical barriers that obscure the semantic context required for rapid decision-making. We present UtilityTwin, a digital twin framework that fuses these disparate streams into a unified Knowledge Graph. By modeling the complex relationships between physical assets, sensor time series, and legacy documentation (structured and unstructured records, as well as aerial images), we establish semantic relationships among the different components of the network. In the interactive demonstration, attendees will access a web-based interface connected to a live data server to navigate a real-world water and heating network. Users can explore the infrastructure behind residential and industrial consumption through visual navigation or a natural-language interface. Participants will be invited to pose complex queries, such as "Summarize recent anomalies in the city alongside repair history", and receive real-time, visually corroborated answers. This demonstrates how Retrieval-Augmented Generation, LLMs, agentic frameworks, and Knowledge Graphs can enable advanced downstream tasks, from demand forecasting to leak detection, by democratizing access to complex signal data. For the research community specifically, this addresses the critical challenge of the "context gap": while researchers possess deep technical expertise, they often lack the domain-specific nuances usually supplied by partnering utilities. By providing semantic context via a modeled Knowledge Graph, our solution allows researchers to interpret abstract signal anomalies by instantly correlating them with physical asset history and geospatial reality, rather than analyzing data in isolation.
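A minimal sketch of the retrieve-then-summarize pattern behind such queries follows, assuming a toy in-memory graph and a keyword router in place of UtilityTwin's LLM-driven query generation; all names and records are illustrative.

    # Toy knowledge graph: one asset with linked anomaly and repair records.
    GRAPH = {
        "pipe_17": {"type": "water pipe",
                    "anomalies": ["pressure drop 2025-03-02"],
                    "repairs": ["joint replaced 2024-11-18"]},
    }

    def retrieve(question: str):
        # Minimal intent routing: match assets whose records fit the
        # question (a real system would generate a graph query instead).
        if "anomal" in question.lower():
            return {a: d for a, d in GRAPH.items() if d["anomalies"]}
        return {}

    def answer(question: str) -> str:
        # Compose the answer strictly from retrieved facts, mirroring the
        # grounding that keeps responses verifiable.
        facts = retrieve(question)
        if not facts:
            return "No matching records found."
        lines = [f"{a}: anomalies {d['anomalies']}, repair history {d['repairs']}"
                 for a, d in facts.items()]
        return "Grounded answer from the graph -> " + "; ".join(lines)

    print(answer("Summarize recent anomalies alongside repair history"))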