Education Short Courses

10-hour courses spanning several days that provide a deeper, multi-faceted understanding of a topic, including hands-on experience. Participants receive the course materials as well as a professional development certificate. Registration for the main conference is not required to attend short courses.

Short Courses Offered
SC-2: Practical Guide to Computational Imaging: From Basics to Brilliance
SC-3: RF Sensing for Wireless AI Perception: Theories, Algorithms, and Applications
SC-4: Multi-Agent Optimization and Learning

SC-2: Practical Guide to Computational Imaging: From Basics to Brilliance

Wed, 17 April, 10:00 - 12:00
Wed, 17 April, 14:00 - 15:30
Thu, 18 April, 10:00 - 12:00
Thu, 18 April, 14:00 - 15:30
Fri, 19 April, 10:00 - 12:00
Fri, 19 April, 14:00 - 15:00

Presented by: Lu Fang, Tsinghua University; Jiachen Wu, Tsinghua University; Xun Cao, Nanjing University; Jinwei Gu, Chinese University of Hong Kong; Yifan Peng, University of Hong Kong; Jiamin Wu, Tsinghua University

Course Abstract

Computational imaging stands at the intersection of mathematics, computer science, and physics, seamlessly blending these disciplines into an innovative field. Its primary mission is to make the invisible visible, encompassing tasks such as photographing a black hole, imaging around corners, and seeing through fog. These extraordinary capabilities are achieved through the joint design of optical hardware and computational algorithms, giving rise to novel imaging systems. By co-designing the hardware and software, computational imaging shatters the boundaries of traditional imaging, transcending limits on dynamic range, spatial resolution, and depth of field.

Today, computational imaging technology finds widespread use in both industry and science. Notably, it plays a prominent role in consumer smartphones, where industry giants such as Apple and Google have established dedicated teams to pioneer computational photography. These ubiquitous pocket-sized cameras necessitate algorithmic innovation because of hardware constraints: limited by slim form factors and cost considerations, specialized lenses and extravagant optical designs remain impractical. Beyond smartphone photography, computational imaging extends its influence to autonomous vehicles, elevating their perceptual capabilities to superhuman levels. A self-driving car must actively perceive its surroundings and navigate safely even when human intervention is minimal or entirely absent, and its camera sensors need not mimic the human eye: they can surpass it. Moreover, computational imaging has made significant contributions to biomedical and medical imaging, a field focused on visualizing and comprehending biological structures and processes across scales, from the molecular to the organismal. Researchers and clinicians can now delve deeper into the complexities of biology, leading to groundbreaking advances in diagnostics, treatment development, and an enhanced understanding of the life sciences.

Contributions to the field of computational imaging have emanated from a diverse array of academic disciplines, encompassing signal processing, optics, machine learning, computer vision, computer graphics, applied mathematics, and others. The strides achieved in computational imaging are intrinsically tied to the progress made in these respective fields. Notably, computational imaging is not merely a shared interest among these heterogeneous communities but an indispensable tool for confronting contemporary scientific challenges.

Syllabus and pre-reading details

This course builds upon our previous university coursework, serving as a bridge between foundational knowledge and the world of Computational Imaging. We'll commence with core Principles of Computational Imaging, ensuring participants have the required background. Next, we dive into Computational Light-field Imaging, exploring principles, capturing techniques, and applications from gigapixel videography to 3D reconstruction. We'll then venture into Computational Holographic Imaging and Display, where wave interference and diffraction create three-dimensional representations. Moving forward, we explore Computational Hyper-spectral Video Acquisition, surpassing traditional color cameras in capturing detailed spectral data. Our journey continues with Computational Fluorescence Microscopy, revolutionizing microscopy for whole-brain and subcellular exploration. Next, we delve into Computational Imaging with Diffractive Optics, covering the design and application of diffractive optical elements (DOEs) in hyper-spectral imaging, holography, and HDR imaging. Finally, we enter Mobile Computational Photography, where advanced mobile image sensors and computational techniques converge for cutting-edge imaging. A detailed outline is listed below.
  1. Principles of computational imaging (Lu Fang, Liangcai Cao, Xun Cao, Jiamin Wu). The first session will introduce fundamental knowledge of computational imaging. Since the field is a co-design of optics and computation, this session will consist of two parts: the imaging system and computational reconstruction. In the imaging-system part, we will introduce the digital image formation model, including the imaging optics, the pinhole camera model, the image sensor, and the image signal processing pipeline. In the computational-reconstruction part, we will introduce the mathematical tools and techniques that facilitate the "computation" required to recover images from measurements. It draws on the wealth of available knowledge from signal processing, optimization theory, and inverse problems. The contents of this part include the modeling of inverse problems, model-based inversion, data-driven inversion, and hybrid inversion techniques. This session will allow the audience to understand the limitations of the conventional imaging pipeline, and the later sessions will show how the computational imaging philosophy helps go beyond what is conventionally possible. This section will include hands-on coding exercises on classic computational image reconstruction, such as image deblurring and tomographic image reconstruction (Exercise 1).
  2. Computational Light-field Imaging (Lu Fang). Building upon the preceding fundamentals, this session will introduce computational light-field imaging. We start with the principle of the light field, the plenoptic function, and its projection into 4D flatland. After that, we will discuss light-field capturing techniques, including lenslet-array, coded-aperture, and camera-array approaches. We will see how these novel camera designs can capture not just a single 2D image of a scene, but a richer representation of the light rays traveling through space (in both the spatial and angular dimensions). Finally, we will showcase various applications of light-field imaging, such as gigapixel videography, astronomical telescopes, large-scale 3D reconstruction, and novel-view synthesis. This section will include hands-on experiments on developing a prototype camera array system, together with light-field rendering and reconstruction algorithms (Exercise 2).
  3. Computational Holographic Imaging and Display (Liangcai Cao). In this section, we explore light from a wave perspective and its role in holography. Holography leverages wave interference and diffraction principles to capture and reconstruct images, faithfully reproducing objects' three-dimensional aspects through computational wizardry, immersing viewers in a captivating visual realm. We will discuss holographic display systems based on angular spectrum theory, as well as lensless imaging methods based on compressive sensing. Furthermore, we will discuss the role of deep learning technology in holographic imaging and display, shedding light on its pivotal role in enhancing the quality and versatility of holographic visual experiences.
  4. Computational Hyper-spectral Video Acquisition (Xun Cao). In this section, our primary focus is the wavelength dimension of light. Hyper-spectral imaging emerges as a pivotal tool, endowing us with the capability to extract a richer spectrum of color information from the electromagnetic spectrum. The crux of the challenge in hyper-spectral video acquisition lies in the intricate task of capturing high-dimensional spectral information within exceedingly short exposure durations. Traditional spectrometers grapple with this challenge by resorting to temporal or spatial scanning strategies, inevitably entangling themselves in the inherent trade-off between temporal and spatial resolution. In response to these challenges, our journey will unfold with an introduction to two representative techniques for hyper-spectral video acquisition: computed tomography imaging spectrometry (CTIS) and coded aperture snapshot spectral imaging (CASSI). Subsequently, we will introduce the Prism-Mask Imaging System (PMIS), a groundbreaking approach designed for the capture of hyper-spectral videos. Our discourse culminates with the unveiling of an ultra-compact hyper-spectral light-field camera featuring a transversely dispersive metalens array.
  5. Computational Fluorescence Microscopy (Jiamin Wu). Computational imaging methods have now been widely applied in microscopy for broad biomedical applications. Long-term subcellular intravital 3D imaging in mammals is vital for studying diverse intercellular behaviors and organelle functions during native physiological processes. However, optical heterogeneity, tissue opacity, and phototoxicity pose great challenges, leading to trade-offs among field of view, resolution, speed, and sample health in traditional microscopy. In this session, we will discuss recent developments and applications of computational imaging methods that address these problems in fluorescence microscopy. Various applications will also be introduced, including brain-wide neural recording in mice at single-neuron resolution, 3D voltage propagation in Drosophila larval neurons, membrane dynamics in zebrafish embryos, and large-scale cell migration during immune response and tumor metastasis in mice. This section will include hands-on coding exercises on super-resolution microscopy (Exercise 3).
  6. Computational Imaging with Diffractive Optics (Yifan Peng). This section will delve into the transformative field of computational imaging with diffractive optics. The investigation commences with a profound examination of the fundamental principles underlying computational imaging with diffractive optics. It delves into the design and implementation of diffractive optical elements (DOEs), highlighting their capacity to manipulate light in unconventional ways to capture intricate details and overcome limitations inherent to conventional optics. We further discuss the computational algorithms that synergize with diffractive optics to reconstruct images. These algorithms, harnessed through iterative optimization techniques or deep neural networks, enable the recovery of rich, multi-dimensional information from complex optical systems. Through comprehensive case studies and applications, we elucidate the diverse range of domains wherein computational imaging with diffractive optics has made substantial inroads, including hyper-spectral imaging, holography, HDR imaging, etc. This section will include hands-on experiments on developing a prototype camera system and reconstruction algorithms with a diffractive lens in place of a conventional lens (Exercise 4).
  7. Mobile Computational Photography (Jinwei Gu). This scholarly investigation delves into the dynamic realm of mobile imaging, characterized by the convergence of cutting-edge mobile image sensors and advanced computational imaging techniques. The focal point of this exploration is the enhancement of images and videos on mobile devices, representing a critical frontier in contemporary mobile photography. The inquiry commences with a comprehensive examination of the latest advancements in mobile image sensors. These innovations usher in a new era of mobile photography, characterized by heightened sensitivity, improved dynamic range, and superior low-light performance. Subsequently, our exploration delves into the sophisticated domain of computational imaging techniques tailored for mobile devices. Leveraging the computational prowess of modern smartphones, these techniques are instrumental in elevating image quality through mechanisms such as noise reduction, dynamic range expansion, and real-time image enhancement.
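As a taste of the deblurring topic in Session 1, the problem can be posed as an inverse problem and solved in closed form with a Wiener filter. This is an illustrative sketch, not the official exercise code; the image, blur kernel, and regularization constant below are arbitrary choices.

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Deblur an image given the blur kernel (PSF) via Wiener filtering.

    blurred : 2-D array, the degraded image
    psf     : 2-D array, point-spread function (same shape, FFT convention)
    k       : noise-to-signal power ratio (regularization constant)
    """
    H = np.fft.fft2(psf)
    B = np.fft.fft2(blurred)
    # Wiener filter in the frequency domain: conj(H) / (|H|^2 + k)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(W * B))

# Toy example: blur a synthetic image with a 5x5 box PSF, then restore it.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                       # a bright square

psf = np.zeros_like(img)
psf[:5, :5] = 1.0 / 25.0                      # box blur, corner-anchored
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

restored = wiener_deblur(blurred, psf, k=1e-4)
print(np.abs(restored - img).mean() < np.abs(blurred - img).mean())  # True
```

In the noise-free case a small `k` suffices; with noisy measurements, `k` trades off deblurring strength against noise amplification, which is exactly the regularization issue the session's model-based inversion part discusses.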
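The light-field refocusing discussed in Session 2 can be illustrated with the classic shift-and-add algorithm: each sub-aperture view is shifted in proportion to its angular coordinate, and the shifted views are averaged. This is a minimal sketch, not the course's prototype code; the array geometry and disparity values are invented for illustration.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic-aperture refocusing by shift-and-add.

    lightfield : 4-D array [U, V, X, Y] of sub-aperture views
    alpha      : refocus parameter; each view is shifted by alpha*(u-u0, v-v0)
    """
    U, V, X, Y = lightfield.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - u0)))
            dv = int(round(alpha * (v - v0)))
            # integer-pixel shift via np.roll (a real system would interpolate)
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy example: a single point seen with a disparity of 2 px per view comes
# into focus when alpha cancels that disparity.
lf = np.zeros((3, 3, 32, 32))
for u in range(3):
    for v in range(3):
        lf[u, v, 16 + 2 * (u - 1), 16 + 2 * (v - 1)] = 1.0

print(refocus(lf, -2.0)[16, 16])  # 1.0: all nine views align on the point
print(refocus(lf, 0.0)[16, 16])   # ~0.11: only the centre view contributes
```

Sweeping `alpha` refocuses the synthetic aperture to different depths, which is the basic operation behind the lenslet-array and camera-array rendering covered in the session.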
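The angular spectrum theory underlying Session 3 also lends itself to a compact numerical sketch: a complex field is propagated by multiplying its 2D spectrum with the free-space transfer function. This is a textbook-style illustration, not course material; the sampling parameters below are arbitrary.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (angular spectrum method).

    field      : 2-D complex array, field in the source plane
    wavelength : wavelength (same length unit as dx and z)
    dx         : sampling pitch in the source plane
    z          : propagation distance (can be negative for back-propagation)
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # Transfer function exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2));
    # evanescent components (negative argument) are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a Gaussian beam 100 um and check energy conservation.
n, dx, wl = 64, 1e-6, 0.5e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.exp(-(X**2 + Y**2) / (2 * (8 * dx) ** 2)).astype(complex)
u1 = angular_spectrum_propagate(u0, wl, dx, 100e-6)
print(np.allclose(np.sum(np.abs(u1)**2), np.sum(np.abs(u0)**2)))  # True
```

Because the transfer function has unit magnitude for propagating components, back-propagating with `-z` recovers the original field; the same kernel drives hologram synthesis and lensless reconstruction in the session.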

Hands-on/Experimental Components

The course will offer participants a comprehensive experience encompassing practical coding exercises (Python and MATLAB), along with prototype development experiments that incorporate essential components such as devices, algorithms, and datasets. Our exercises draw inspiration from the presenters' relevant courses, aligning with real-world challenges and applications. For example, on Day 1 and Day 2 we will utilize exercises from the Media and Cognition course (Tsinghua University 40231253) and the Machine Vision course (Tsinghua University 80231302).

By actively engaging in these practical activities, participants can not only grasp the foundational concepts but also gain valuable experience in applying computational imaging techniques to tackle complex problems. This hands-on approach empowers learners to develop both their theoretical knowledge and practical skills in the field.

Prerequisite & Intended Audience

The primary audience comprises senior undergraduate students, early-stage graduate students, and budding researchers across various disciplines, including Optics Engineering, Electronic Engineering, Computer Science, Medical/Biomedical Engineering, Physics, and Mathematics. Importantly, prior research experience in computational imaging is not a prerequisite. A foundational grasp of basic linear algebra and programming suffices as the entry point for prospective learners. By working through real-world scenarios and challenges, participants will not only sharpen their technical skills but also cultivate the creativity and problem-solving acumen required in this field. Moreover, the course encourages collaborative learning, fostering an environment where participants from diverse academic backgrounds can share insights and ideas.

Presenter Biographies

Lu Fang is currently an Associate Professor in the Department of Electronic Engineering, Tsinghua University. She received her Ph.D. from the Hong Kong University of Science and Technology in 2011, and her B.E. from the University of Science and Technology of China in 2007. Dr. Fang conducts multidisciplinary research in the fields of Computational Imaging and Neuromorphic Computing. She has published 50+ journal papers (Nature, Nature Photonics, Nature Machine Intelligence, Nature Methods, IEEE TPAMI, etc.) and 60+ conference papers (CVPR, ICML, etc.). She served on the MMSP-TC (2015-2017), and has served as ACM MM 2021 Grand Challenge Co-chair, IEEE ICME 2023 Demo Co-chair, and the lead organizer of the GigaVision Challenges/Workshops at CVPR 2019, ECCV 2020, ICCV 2021, and ACM MM 2022. Dr. Fang is an IEEE Senior Member and serves as Associate Editor for IEEE TIP and OSA Optica.

Jiachen Wu received his B.S. degree in Optoelectronic Information Engineering from Xi'an University of Posts and Telecommunications, Xi'an, China, in 2012, his M.S. degree in Optical Engineering from Shenzhen University, Shenzhen, China, in 2016, and his Ph.D. degree in Optical Engineering from Tsinghua University, Beijing, China, in 2022. He was a Visiting Scholar with TU Dresden, Germany, from May 2021 to April 2022. He is currently a Research Assistant at the Department of Precision Instruments, Tsinghua University. His research interests include lensless imaging and digital holography.

Xun Cao received the B.S. degree from Nanjing University, Nanjing, China, in 2006, and the Ph.D. degree from the Department of Automation, Tsinghua University, Beijing, China, in 2012. He held visiting positions with Philips Research, Aachen, Germany, in 2008, and Microsoft Research Asia, Beijing, from 2009 to 2010. He was a Visiting Scholar with the University of Texas at Austin, Austin, TX, USA, from 2010 to 2011. He is currently a Professor with the School of Electronic Science and Engineering, Nanjing University. His current research interests include computational photography/imaging, especially computational spectral imaging.

Jinwei Gu is currently an Associate Professor at the Chinese University of Hong Kong. His research focuses on low-level computer vision, computational photography, computational imaging, and appearance modeling. He obtained his Ph.D. in 2010 from Columbia University. Before joining CUHK, he worked in industry for many years developing algorithms and software for mobile computational photography, autonomous and assisted driving, and AR/VR. He serves or has served as an associate editor for IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and IEEE Transactions on Computational Imaging (TCI), as industrial chair for ICCP, and as area chair for CVPR, ICCV, ECCV, and NeurIPS.

Yifan Peng is an Assistant Professor in Electrical and Electronic Engineering and Computer Science at the University of Hong Kong. He received his PhD in Computer Science from the Imager Lab at the University of British Columbia, and both his M.S. and B.S. in Optical Science and Engineering from the State Key Lab of Modern Optical Instrumentation, Zhejiang University. His research lies in the interdisciplinary field of optics, graphics, vision, and artificial intelligence, with a particular focus on computational optics, sensing, and display; holographic imaging/display and VR/AR/MR; computational microscope imaging; low-level computer vision; inverse rendering; and human-centered visual and sensory systems.

Jiamin Wu is an Assistant Professor in the Department of Automation at Tsinghua University, and a PI at the IDG/McGovern Institute for Brain Research, Tsinghua University. His current research interests focus on computational imaging and systems biology, with a particular emphasis on developing mesoscale optical setups for observing large-scale biological dynamics in vivo. In the last five years, he has published more than 40 journal papers in Nature, Cell, Nature Photonics, Nature Biotechnology, Nature Methods, and elsewhere. He has served as Associate Editor of PhotoniX and IEEE Transactions on Circuits and Systems for Video Technology, and as Guest Editor-in-Chief of Light: Science & Applications.

SC-3: RF Sensing for Wireless AI Perception: Theories, Algorithms, and Applications

Wed, 17 April, 10:00 - 12:00
Wed, 17 April, 14:00 - 15:30
Thu, 18 April, 10:00 - 12:00
Thu, 18 April, 14:00 - 15:30
Fri, 19 April, 10:00 - 12:00
Fri, 19 April, 14:00 - 15:00

Presented by: Chenshu Wu, The University of Hong Kong; He Henry Chen, The Chinese University of Hong Kong; Haitham Hassanieh, EPFL; Yasamin Mostofi, University of California, Santa Barbara; Beibei Wang, Origin AI; Sandeep Rao, Texas Instruments

Course Abstract

The past decade has witnessed the conceptualization and rapid development of wireless sensing, i.e., sensing using RF signals such as Wi-Fi, mmWave, UWB, and RFID. Remarkable advances have been achieved in both academia and industry. Take Wi-Fi sensing as an example: it has turned Wi-Fi devices from a pure communication platform into a ubiquitous sensing infrastructure. Wi-Fi sensing leverages ambient Wi-Fi signals to analyze and interpret environmental contexts, achieving human sensing in a wireless, contactless, and sensorless way without cameras, wearables, or dedicated sensors. It is revolutionizing fields like healthcare, home/robot automation, elderly care, and smart cars, with applications such as presence detection, vital sign monitoring, sleep monitoring, fall detection, gait recognition, and gesture control, just to name a few. To date, Wi-Fi sensing has been successfully commercialized in world-changing products (e.g., Verizon Home Awareness, Wiz SpaceSense, and Linksys Aware, all partnered with Origin AI, an industry leader in Wi-Fi sensing), and many more innovative applications remain to be explored. This sensing capability adds a brand-new dimension to the functions, capabilities, and applications of all Wi-Fi systems, inaugurating a new WLAN sensing protocol, IEEE 802.11bf, and spawning the emerging paradigm of integrated sensing and communication, an increasingly hot topic and the next big move in wireless communication systems. Besides Wi-Fi, other RF technologies, especially millimeter-wave (mmWave) signals, have also been extensively exploited for non-contact sensing, fostering applications such as pose estimation, imaging, and vital sign monitoring using low-cost, compact mmWave radars.
In the broad context of machine perception, wireless sensing technologies extend conventional modalities beyond the visible spectrum (color or infrared images) to the RF spectrum (Wi-Fi, mmWave, 5G, UWB, etc), enabling a new Wireless AI Perception that works in absolute darkness, through occlusions, and with privacy protection.

Over the years, a range of theories, models, methods, and applications have been proposed and developed to achieve these technological advances. Regardless of the underlying modality, wireless sensing mostly leverages channel information, commonly represented as the Channel Impulse Response (CIR) or Channel State Information (CSI), which characterizes how RF signals interact with the environment and the targets therein during propagation. While the wireless channel has been well modeled for wireless communication, sensing with channel measurements on commodity devices, such as off-the-shelf Wi-Fi or compact indoor mmWave radars, involves various new challenges. Many conventional techniques used in radar signal processing and wireless communications do not directly apply and need to be upgraded. For example, estimation of channel parameters such as Time-of-Flight, Angle-of-Arrival, and Doppler speed becomes difficult, if not prohibitive, due to the limited bandwidth, small antenna arrays, and significant phase offsets of commodity Wi-Fi and compact radar devices. Therefore, in addition to further development of traditional sensing techniques, brand-new theories and methods have been proposed, such as statistical electromagnetic approaches. On the other hand, as deep learning demonstrates impressive effectiveness across domains, deep wireless sensing (i.e., wireless sensing with deep learning) has also attracted considerable attention, and remarkable advances have been achieved in tailoring data processing, feature extraction, and model design to the unique characteristics of wireless data (complex-valued, high-dimensional, spanning time-frequency domains, non-visible, etc.).
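To make the channel-parameter-estimation step concrete, consider the textbook far-field model for angle of arrival on a two-antenna array: the phase difference between antennas encodes sin(θ). The sketch below is an idealized, noise-free illustration (not taken from the course); real Wi-Fi CSI phases carry the offsets discussed above, which is precisely why practical AoA estimation is hard, and the carrier frequency and spacing here are arbitrary example values.

```python
import numpy as np

def aoa_from_phase(phase_diff, spacing, wavelength):
    """Estimate angle of arrival from the phase difference between two antennas.

    A plane wave arriving at angle theta (from broadside) travels an extra
    path d*sin(theta) to the second antenna, i.e. an extra phase of
    2*pi*d*sin(theta)/lambda.
    """
    s = phase_diff * wavelength / (2 * np.pi * spacing)
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

# Simulate a 5.8 GHz wave hitting a half-wavelength-spaced pair at 30 degrees.
wl = 3e8 / 5.8e9           # wavelength, ~5.17 cm
d = wl / 2                 # antenna spacing
theta_true = np.radians(30.0)
phase = 2 * np.pi * d * np.sin(theta_true) / wl
print(aoa_from_phase(phase, d, wl))  # ~30.0 degrees
```

With only two or three antennas, the angular resolution of this estimator is coarse and ambiguous under multipath, motivating the super-resolution and statistical techniques the course covers.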

As witnesses to, and important participants in, these advances, we believe it is the right time and of great interest to bring this increasingly hot topic of wireless sensing to the signal processing community. Many design principles of the related algorithms are signal-processing based, but wireless sensing has not been systematically introduced to or embraced by the broader signal-processing community at ICASSP. Wireless sensing is a suite of increasingly popular cross-disciplinary research areas that sister communities such as biomedical engineering, computer vision, communications, and ubiquitous computing have embraced with open arms in recent years. Yet the key building blocks touch multiple technical areas of the signal processing community: it is truly "Signal Processing Inside" and a perfect fit for the ICASSP audience. This short course will raise awareness of the exciting research and development opportunities in wireless sensing, stimulate discussion and exploration, and help the signal processing community play a strong role in this emerging area. The vital roles of signal processing discussed in the course can help students appreciate the importance of signal processing through timely and appealing examples, fostering an opportunity to engage signal-processing researchers and industry technologists in this promising yet technically challenging area.

The goal of the course is to introduce (1) the concepts, fundamental principles, promising applications, and major challenges of wireless sensing, with a focus on Wi-Fi sensing and mmWave sensing; (2) how signal processing techniques are tailored to extract contexts of interest from often noisy and weak RF sensing signals, e.g., the extremely weak breathing/heartbeat motions under NLOS conditions and in multipath-rich environments, including algorithms adapted from traditional channel parameter estimation as well as statistical approaches designed specifically for wireless sensing; and (3) how to apply deep learning techniques and design new RF-tailored neural networks for the unique wireless sensing data, covering data representation, data augmentation and generation, feature extraction, model design, etc.

Syllabus and pre-reading details

The course will be mainly built upon our previous courses, including COMP3516 Data Analytics for IoT by Chenshu at HKU, IERG5110 Signal Processing in Wireless Communications and Sensing by Henry at CUHK, and the online tutorials by Sandeep Rao at TI, in addition to an ICASSP 2023 Tutorial (No Touch Needed: Contact-Free Physiological Sensing for Fitness and Healthcare Using Cameras and RF Signals). The course will start by introducing the concepts, basic principles, popular applications, and common modalities (e.g., Wi-Fi, mmWave radars, UWB radars, etc) for wireless sensing using RF signals. Then we focus on two mainstream sensing signals, i.e., FMCW mmWave radar and commodity Wi-Fi signals. For each theme, we will cover aspects of the hardware platform and software tools, theories and models, methods, and applications. We will start with model-based approaches using signal processing techniques, followed by data-driven approaches using deep learning. Overall, the course features four sessions in total, with one session of 2.5 hours each day. A detailed outline is listed below.

  1. Introduction to Wireless Sensing: Applications, Signals, and Principles (Chenshu Wu, Henry Chen, Sandeep Rao) The first session will introduce the basics of Wi-Fi sensing and mmWave sensing. We will start with the basic concepts and a wide range of applications of wireless sensing, together with a comparison of wireless sensing modalities against other perception methods like computer vision. We will then introduce the fundamental properties of Wi-Fi OFDM systems and Wi-Fi CSI, followed by explaining the unique challenges and difficulties in realizing practical Wi-Fi sensing, such as the limited bandwidth, small antenna size, and various system imperfections. We will then accordingly introduce techniques to effectively expand the bandwidth, extend the antenna size, and mitigate synchronization errors. The last part of this session will cover FMCW mmWave signals and basic radar processing techniques, such as range, velocity, and angle estimation.
  2. WiFi Sensing with Signal Processing: From Geometric to Statistical (Yasamin Mostofi, Beibei Wang): In this section, we will focus on Wi-Fi sensing and dive deeper into the body of signal processing techniques developed for it. We will start with Wi-Fi channel modeling, including reflection and scattering models, and introduce techniques to clean the noisy Wi-Fi CSI or obtain finer-grained CSI. We will then present two mainstream approaches for Wi-Fi sensing: geometric approaches, which aim to resolve individual multipath components and estimate channel parameters such as ToF, AoA, and Doppler frequency shifts from Wi-Fi CSI, and statistical approaches, which utilize all multipath components and analyze their statistical behavior from the perspective of statistical EM waves. We will also introduce the latest advances in Wi-Fi imaging based on the Geometrical Theory of Diffraction. Through these two categories of sensing approaches, we will show the audience how to build Wi-Fi sensing applications, such as passive tracking, motion detection, breathing rate estimation, speed estimation, fall detection, and gait analysis, all using signal processing techniques without any intensive training or computation.
  3. Wireless Sensing with Deep Learning (Chenshu Wu, Haitham Hassanieh, Henry Chen): This session marks the transition from model-based approaches using signal processing techniques to data-driven approaches using deep learning. Wireless sensing with conventional signal processing excels in explainability and generalizability; however, due to the complicated multipath effect and poor multipath resolution, it often falls short in complex real-world settings for fine-grained applications. Deep learning-based approaches promise better performance but face new challenges. In this session, we will show how to design deep neural networks effectively for wireless sensing. We will discuss how to represent complex wireless data and extract effective features for learning, followed by how to perform data augmentation and generation to overcome data scarcity. We will then show how to customize neural networks for the unique properties of wireless data, such as complex values and time-frequency structure. We will cover applications of deep learning in both Wi-Fi sensing and mmWave sensing, and discuss general designs for different wireless sensing modalities.
  4. mmWave Sensing: Modeling, Learning, and Hands-on Practice (Chenshu Wu, Sandeep Rao) In the last session, we will center our focus on mmWave sensing using commodity FMCW radars, e.g., the TI mmWave radars. We will again start with model-based approaches and show how mmWave sensing enables various applications, such as multi-target vital sign monitoring, tracking, imaging, and material identification. We will then show how deep learning can be applied to mmWave signals for advanced applications, such as speech enhancement and separation, super-resolution imaging, and target classification. We will conclude by touching on cross-modal/multi-modal sensing and learning, e.g., fusion of RF and visual signals for advanced perception. Lastly, we will have a hands-on lab session with real hardware (a TI mmWave radar) in which the audience can apply the learned knowledge and skills to build a real gesture recognition system using the TI IWRL6432 evaluation module (the required hardware components will be provided).
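The FMCW range processing introduced in Sessions 1 and 4 can be sketched in a few lines: after dechirping, the beat frequency is proportional to target range, so a single FFT peak yields a range estimate. This is a simplified, single-target simulation; the chirp parameters below are illustrative and are not those of the TI IWRL6432.

```python
import numpy as np

def fmcw_range(beat, fs, slope):
    """Estimate target range from one FMCW chirp's beat (IF) signal.

    beat  : sampled beat signal for one chirp
    fs    : ADC sampling rate (Hz)
    slope : chirp slope (Hz/s); range R maps to beat frequency 2*R*slope/c
    """
    c = 3e8
    n = len(beat)
    # Window to reduce spectral leakage, then locate the FFT peak.
    spectrum = np.abs(np.fft.rfft(beat * np.hanning(n)))
    f_beat = np.fft.rfftfreq(n, d=1.0 / fs)[np.argmax(spectrum)]
    return f_beat * c / (2 * slope)

# Simulate a target at 5 m: a 4 GHz / 100 us chirp sampled at 10 MHz.
fs = 10e6
slope = 4e9 / 100e-6                  # 4e13 Hz/s
t = np.arange(1024) / fs
f_beat = 2 * 5.0 * slope / 3e8        # ~1.33 MHz beat frequency
beat = np.cos(2 * np.pi * f_beat * t)
print(fmcw_range(beat, fs, slope))    # ~5.0 m, quantized to one FFT bin
```

The FFT bin width sets the range resolution c/(2B), which is why the session emphasizes bandwidth as a key hardware constraint; velocity and angle estimation follow the same pattern with FFTs across chirps and antennas.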
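Likewise, the breathing-rate estimation mentioned in Session 2 reduces, in the idealized single-person case, to finding the dominant periodicity of a CSI amplitude trace within the human breathing band. The sketch below uses a synthetic trace with invented amplitudes and noise levels; real CSI requires the denoising and statistical techniques discussed in the course.

```python
import numpy as np

def breathing_rate_bpm(signal, fs):
    """Estimate breathing rate (breaths/min) from a periodic amplitude trace.

    Looks for the dominant spectral peak in a typical breathing band
    (0.1-0.5 Hz, i.e. 6-30 breaths per minute).
    """
    n = len(signal)
    sig = signal - np.mean(signal)            # remove the static component
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Simulate 60 s of CSI amplitude sampled at 20 Hz: a 0.25 Hz breathing
# oscillation riding on a static path, plus measurement noise.
rng = np.random.default_rng(1)
fs = 20.0
t = np.arange(0, 60, 1 / fs)
csi_amp = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t) \
          + 0.01 * rng.standard_normal(t.size)
print(breathing_rate_bpm(csi_amp, fs))  # ~15 breaths per minute
```

Restricting the search to a physiological band is a simple prior; the statistical approaches in Session 2 go further by exploiting all multipath components rather than a single amplitude trace.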

Hands-on/Experimental Components

This course will provide programming exercises (using Python) with supporting code and datasets. We will have a series of labs for the audience to understand the basics of wireless sensing, and we will also provide a public dataset for building learning-based sensing solutions. More importantly, this course will feature hands-on sensing experiments with real hardware. We will use the IWRL6432BOOST evaluation module, which is built around the low-power IWRL6432 mmWave radar sensor from Texas Instruments. The module will come with preloaded firmware that performs basic radar pre-processing. The pre-processed streamed output will be used to develop a gesture recognition application. Reference code (PyTorch/Keras scripts) will be provided for quick annotation and training. The participants will be divided into a few groups, and each group will interact with the radar and observe how it reacts to different gestures. They will then attempt to develop a gesture recognition system.

Prerequisite & Intended Audience

Centering on the principles and practice of wireless sensing, this course is the first of its kind. It introduces the concepts, principles, models, signal processing techniques, deep learning methods, and applications of wireless sensing, along with industrial insights. By delivering this course, we hope to bring this rapidly growing area to the signal processing community, attract more researchers, and help the SP society play a stronger role in this emerging field. The content and knowledge delivered by this course will be of great value to young researchers and graduate students. The target audience includes senior undergraduates, entry-level graduate students, young researchers and engineers, and industrial practitioners in relevant areas such as signal processing, sensing, perception, robotics, and machine learning.

Research experience in wireless sensing is not required to take this course, and neither is experience with hardware. Students are expected to be familiar with basic linear algebra, digital signal processing, wireless communications, and programming (in Python).

Presenter Biographies

He (Henry) Chen received a Ph.D. degree in Electrical Engineering from The University of Sydney, Sydney, Australia, in 2015. He was a Research Fellow with the School of Electrical and Information Engineering, The University of Sydney. In July 2019, he joined the Department of Information Engineering at the Chinese University of Hong Kong, where he is now an Assistant Professor. Dr. Chen's current research interest includes wireless communications, wireless sensing, and their applications in robotic systems. Dr. Chen is serving on the editorial board of IEEE Transactions on Wireless Communications, and he served on the editorial board of IEEE Wireless Communications Letters 2020-2022. He has delivered tutorial presentations at esteemed conferences like IEEE GLOBECOM 2020 and IEEE/CIC ICCC 2020. In addition, he has co-organized workshops for IEEE GLOBECOM in 2016, 2017, and 2020, as well as for IEEE ICPS 2019 and IEEE INDIN 2020.

Haitham Hassanieh is an associate professor in the School of Computer and Communication Sciences at EPFL. His research is in the areas of wireless networks, mobile systems, sensing, and algorithms. Before joining EPFL, he was a professor at the University of Illinois at Urbana-Champaign (UIUC). He received his PhD and MS degrees in EECS from MIT in 2016 and 2011, respectively. He received his BE in CCE from AUB. His PhD thesis on the Sparse Fourier Transform won the ACM Doctoral Dissertation Award for best computer science thesis in the world, the Sprowls best thesis award at MIT, and the TR10 Award for top ten breakthrough technologies in 2012. His research has received best paper awards at ACM SIGCOMM and ACM MobiSys. He is also the recipient of the NSF CAREER Award, the Google Faculty Research Award, and the Alfred P. Sloan Foundation Fellowship.

Yasamin Mostofi received the B.S. degree in electrical engineering from Sharif University of Technology, and the M.S. and Ph.D. degrees from Stanford University. She is currently a professor in the Department of Electrical and Computer Engineering at the University of California Santa Barbara. Yasamin is the recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), the 2016 Antonio Ruberti Prize from the IEEE Control Systems Society, NSF CAREER award, and the IEEE 2012 Outstanding Engineer Award of Region 6 (more than 10 Western U.S. states), among other awards. She is a fellow of IEEE. Yasamin's research is multi-disciplinary, in the areas of wireless systems and robotics/control. Yasamin has served in many different professional capacities over the years, including serving on the Board of Governors of IEEE CSS, serving as a senior editor for IEEE TCNS, and serving as a program co-chair for ACM MobiCom 2022, among others.

Sandeep Rao is with Texas Instruments (TI), where he leads an R&D group on mmWave signal processing. His research group works on identifying new applications for mmWave sensing and is actively involved in defining the next generation of TI mmWave radar devices. He has a Master's degree from the University of Maryland and a Bachelor's degree from the Indian Institute of Technology Madras. He has 30 granted patents (and an additional 20+ pending) in the field of mmWave sensing and GPS.

Beibei Wang received the B.S. degree in electrical engineering from the University of Science and Technology of China in 2004, and the M.S. and Ph.D. degrees in electrical engineering from the University of Maryland, College Park in 2008 and 2009, respectively. In 2009-2010 she was a postdoctoral research associate with the University of Maryland, College Park. In 2010-2014, she was with Qualcomm Research. Since 2015, she has been with Origin Wireless Inc., where she is Vice President of Research. Her research interests include wireless sensing and positioning. She has published over 100 technical papers and co-invented over 100 patent applications, 55 of which have been granted.

Chenshu Wu is an Assistant Professor in the Department of Computer Science at the University of Hong Kong, where he leads the HKU AIoT Lab. He served as the Chief Scientist of Origin AI and now consults for the company. He was an assistant research scientist in the ECE Department, University of Maryland, College Park, after serving as a research associate at Tsinghua University and at Princeton University. He received his B.E. and Ph.D. degrees from Tsinghua University in 2010 and 2015, respectively. His research focuses on wireless and mobile AIoT systems at the intersection of wireless sensing, ubiquitous computing, and the Internet of Things. Dr. Wu is the recipient of five best paper awards, the NSFC Excellent Young Scientists Fund (HK & Macau), the NAM Healthy Longevity Catalyst Award, and the CCF Outstanding Doctoral Dissertation Award. Related to the proposed course, he co-organized the Wireless AI Perception workshop at CVPR'22, the CPD workshop at UbiComp'21, a special session on wireless sensing at ICASSP'21, and a tutorial at ICASSP'23.

Wed, 17 April, 10:00 - 12:00
Wed, 17 April, 14:00 - 15:30
Thu, 18 April, 10:00 - 12:00
Thu, 18 April, 14:00 - 15:30
Fri, 19 April, 10:00 - 12:00
Fri, 19 April, 14:00 - 15:00

Presented by: Stefan Vlaski, Imperial College London; Ali H. Sayed, École Polytechnique Fédérale de Lausanne

Course Abstract

One of the defining characteristics of the 21st century has been the proliferation and dispersion of data and computational resources. These trends have enabled a paradigm shift in many areas of engineering, signal processing, and beyond, allowing us to design intelligent systems which build models and perform inference directly from data. These techniques have led to tremendous progress across a number of signal processing areas ranging from speech and image processing to recommender systems, forecasting, communication systems and power allocation.

At the same time, data is increasingly available in dispersed locations, rather than powerful central data centers. Data is generated and processed at the edge, on our mobile and IoT devices, in sensors scattered throughout “smart cities” and “smart grids”, robotic swarms, and vehicles on the road. In order to benefit from these vast and distributed data sets while preserving communication efficiency, privacy and robustness, we need to employ distributed learning algorithms, which rely on local processing and interactions. When properly designed, these algorithms are able to exhibit globally optimal behavior and match the performance of benchmarks relying on central aggregation of raw data.

The development of algorithms for distributed signal and information processing has been an active area of research for the past 25 years [1, 2], and has now reached a level of maturity that allows a cohesive overview to be presented in a classroom. At the same time, the recent emergence of federated learning [3] has galvanized interest in distributed learning beyond the signal processing community, and led to rapid adoption by major industry players including Google and Apple.

This short course on “Multi-Agent Optimization and Learning” provides attendees with tools for distributed optimization and learning that allow them to design intelligent distributed systems. Emphasis is placed on why algorithms work, how we can systematically develop them, and how we can quantify their performance trade-offs. We also show how to use this information to drive design decisions. This course will bring students and researchers in the signal processing community up to speed with an active area of research that has solid foundations but many open questions. Practitioners will gain a fundamental understanding of distributed multi-agent systems, allowing them to identify, evaluate, and exploit their value in their respective applications.

Syllabus and pre-reading details

The course is adapted from related courses taught at UCLA, EPFL and Imperial College London, and naturally lends itself to delivery as ten one-hour lectures spread over five days, structured as follows:

Foundations and Federated Learning

  • Lecture 1: Foundations This lecture reviews fundamental concepts relevant to distributed optimization and learning. We will see how statistical learning theory motivates the formulation of aggregate optimization problems, and how solutions to said problems can be pursued systematically and efficiently using stochastic gradient algorithms.
  • Lecture 2: Federated Learning We will motivate the need for learning without the exchange of raw data, and will show how federated learning architectures can achieve this goal in the presence of a fusion center. We will demonstrate the benefit of collaboration in federated learning settings, and identify key factors and performance trade-offs including the number of agents, level of heterogeneity and data quality.
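The benefit of collaboration under a fusion center can be previewed with a toy federated-averaging step: each agent takes a local gradient step on its own data, and the server averages the resulting models. This is a simplified sketch (shared least-squares objective, full local gradients, synthetic data), not the exact algorithm developed in the lectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# A common ground-truth model; each agent holds its own noisy data,
# with heterogeneous dataset sizes to mimic a federated setting
w_true = np.array([1.0, -2.0, 0.5])
agents = []
for n in (20, 50, 80):
    A = rng.standard_normal((n, 3))
    b = A @ w_true + 0.1 * rng.standard_normal(n)
    agents.append((A, b))

w = np.zeros(3)   # shared model held by the fusion center
mu = 0.05         # local step size
for _ in range(500):
    # Each agent takes one local gradient step on its least-squares loss ...
    local = [w - mu * (A.T @ (A @ w - b)) / len(b) for A, b in agents]
    # ... and the server averages the updated models (FedAvg-style)
    w = np.mean(local, axis=0)

print(np.round(w, 2))  # close to w_true, without any raw data leaving an agent
```

Note that only model iterates are exchanged with the server; the raw data pairs (A, b) stay local, which is the privacy argument motivating the architecture.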

Graphs and their Role in Distributed Processing

  • Lecture 3: Graph Theory Recognizing the need for decentralized learning algorithms without a fusion center, we will introduce graphs as a useful tool for modeling peer-to-peer relations among intelligent agents. We will develop relevant techniques such as properties of adjacency and Laplacian matrices, the Perron-Frobenius theorem and graph spectral theory.
  • Lecture 4: Basic Processing over Networks Armed with the tools to study networked systems, we will show how they can be leveraged to develop simple decentralized processing algorithms, such as consensus averaging and denoising algorithms. This simple setting will already allow us to demonstrate and quantify some of the key performance trade-offs in decentralized processing, such as the level of connectivity captured in the mixing rate of the graph.
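As a preview of these two lectures, the sketch below builds Metropolis-Hastings combination weights for a small ring network and runs consensus averaging: every agent repeatedly replaces its value with a weighted average of its neighbors' values and converges to the network-wide mean, at a speed governed by the mixing rate (the second-largest eigenvalue magnitude of the combination matrix). The graph and values are illustrative.

```python
import numpy as np

# Undirected ring of 5 agents (adjacency matrix)
N = 5
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
deg = A.sum(axis=1)

# Metropolis-Hastings rule yields a symmetric, doubly stochastic W
W = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if A[i, j]:
            W[i, j] = 1 / (1 + max(deg[i], deg[j]))
    W[i, i] = 1 - W[i].sum()

x0 = np.array([10.0, 0.0, 3.0, 7.0, 5.0])  # each agent holds one value
x = x0.copy()
for _ in range(100):
    x = W @ x   # one round of local exchanges with neighbors

# All agents approach the network average; convergence speed is set by
# the second-largest eigenvalue magnitude of W (the mixing rate)
lam = np.sort(np.abs(np.linalg.eigvals(W)))[-2]
print(np.round(x, 3), round(float(lam), 3))  # all ≈ 5.0 (the average)
```

Because W is doubly stochastic, the average of the agents' values is preserved at every iteration, which is why the common limit is exactly the mean of the initial values.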

Algorithms for Distributed Optimization and Learning

  • Lecture 5: Penalty-Based Algorithms for Distributed Optimization By combining optimization techniques from Day 1 with networked processing techniques from Day 2 we will develop penalty-based algorithms for distributed optimization and learning including the distributed gradient descent [4, 5], consensus + innovations [6], and diffusion algorithms [2, 7].
  • Lecture 6: Primal-Dual Algorithms for Bias-Corrected Learning We will show how Lagrangian duality theory and dynamic consensus algorithms can be exploited to derive primal-dual counterparts of the penalty-based algorithms developed in Lecture 5, including the EXTRA algorithm [8], gradient tracking [9], and exact diffusion [10]. We will observe empirical differences in performance and the impact of network topology and heterogeneity.
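To make the combination of Day 1 and Day 2 concrete, here is a minimal sketch in the spirit of the diffusion (adapt-then-combine) strategy of [2, 7]: each agent takes a local gradient step on its own least-squares data, then averages the intermediate estimates of its neighbors. The network, weights, and data are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 4, 2

# Ring network with uniform, doubly stochastic combination weights
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = W[i, (i + 1) % N] = W[i, (i - 1) % N] = 1 / 3

# Each agent observes its own data for a common underlying model
w_true = np.array([2.0, -1.0])
data = []
for _ in range(N):
    A = rng.standard_normal((30, d))
    b = A @ w_true + 0.05 * rng.standard_normal(30)
    data.append((A, b))

X = np.zeros((N, d))   # row k holds agent k's current estimate
mu = 0.05
for _ in range(300):
    # Adapt: each agent takes a local gradient step on its own loss
    psi = np.array([x - mu * (A.T @ (A @ x - b)) / len(b)
                    for x, (A, b) in zip(X, data)])
    # Combine: each agent averages its neighbors' intermediate estimates
    X = W @ psi

print(np.round(X, 2))  # every row close to w_true
```

Even with this naive construction the agents agree on a good estimate; the bias that penalty-based schemes incur under heterogeneous data, and its removal by the primal-dual algorithms of Lecture 6, is exactly what Days 3 and 4 quantify.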

Performance Guarantees and Trade-Offs

  • Lecture 7: Analytical Performance Guarantees Motivated by the empirical observations of Day 3, we will develop a convergence analysis of penalty-based and primal-dual algorithms for distributed optimization and learning, and quantify the trade-offs of various implementations.
  • Lecture 8: Reconciling Analytical and Empirical Results We will apply the analytical guarantees from Lecture 7 to inform the design of multi-agent learning algorithms and demonstrate how they allow us to deploy improved and provably optimal constructions.

Advanced Topics and Open Problems

  • Lecture 9: Communication-constrained and variance-reduced distributed learning We will see how compression and variance reduction can further improve the communication and sample-efficiency of distributed learning architectures.
  • Lecture 10: Open problems We will highlight some open problems and future research directions in the area of distributed optimization and learning.


This short course is based on related courses taught at UCLA, EPFL and Imperial College London, the materials of which will be adapted and made available to attendees on a dedicated website. These will include lecture recordings, slides, lecture notes, and Jupyter notebooks to support the computer labs. The corresponding reference material is Volume 1 of [11].

Hands-On Components

The lecture materials are accompanied by five computer laboratory assignments, each of which attendees are invited to complete in 1-2 hours of self-study. The labs reproduce and illustrate the key insights from each day, are provided as Jupyter notebooks, and take the following form:

  • Lab 1: The benefit of cooperation in multi-agent systems
  • Lab 2: Consensus averaging and denoising over networks
  • Lab 3: The benefit of bias-correction in distributed learning
  • Lab 4: Designing provably optimal distributed multi-agent systems
  • Lab 5: Distributed learning with PyTorch and the Fashion MNIST dataset

Attendees can choose to complete labs during the days of the short course or following completion of the course. The presenters will be available to support lab work for a period of two weeks following completion of the short course.

Target Attendees and Prerequisites

Variations of this course have been taught at the Master's and PhD level to students with varying backgrounds in optimization and learning theory, and the course is hence designed to be largely self-contained. The focus of the lectures will be on foundational insights and their impact on practice, making them equally valuable for analytically minded researchers and for practitioners looking to apply distributed techniques in their respective fields. Attendees will benefit from basic knowledge of linear algebra, stochastic processes, and calculus. Basic knowledge of Python will be helpful for completing the practical computer labs.

Course Materials

Course materials will be made available on a dedicated course website should the short course be accepted for presentation at ICASSP. Earlier versions of related courses can be found at

Presenter Biographies

Stefan Vlaski is Lecturer (Assistant Professor) in the Communications and Signal Processing Group within the Department of Electrical and Electronic Engineering at Imperial College London, where he conducts research at the intersection of machine learning, network science and optimization with applications in signal processing and communications.

Dr. Vlaski received the B.Sc. degree in Electrical Engineering from Technical University Darmstadt, Germany, in 2013, and M.S as well as Ph.D. degrees in Electrical and Computer Engineering from the University of California, Los Angeles, USA, in 2014 and 2019, respectively. From 2019 to 2021 he was Postdoctoral Researcher with the Adaptive Systems Laboratory at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

Dr. Vlaski has presented a tutorial at ICASSP 2022, and is teaching a module titled "Distributed Optimization and Learning" at Imperial College London.

Ali H. Sayed is Dean of Engineering at EPFL, Switzerland, where he also leads the Adaptive Systems Laboratory. He has also served as Distinguished Professor and Chairman of Electrical Engineering at UCLA. He has been recognized as a Highly Cited Researcher for several years and is a member of the US National Academy of Engineering and The World Academy of Sciences. He served as President of the IEEE Signal Processing Society during 2018 and 2019.

Dr. Sayed is an author/co-author of over 600 scholarly publications and nine books. His research involves several areas including adaptation and learning theories, data and network sciences, statistical inference, and multi-agent systems. His work has been recognized with several awards including the 2022 IEEE Fourier Technical Field Award, the 2020 Norbert Wiener Society Award, and several Best Paper awards.

Dr. Sayed has given over 200 seminars, keynote lectures, and several tutorials to date. In particular, he has delivered tutorials at IEEE ICASSP 2002, 2009, 2012, 2015, and 2022. He has taught various modules related to the proposed short course at UCLA and EPFL.