Tutorials

 

Henry Pfister

Boolean Functions and Error-Correcting Codes

Why Reed-Muller Codes Achieve Capacity
Abstract

Over the past 10 years, there have been significant advances in our understanding of sequences of structured error-correcting codes and their ability to achieve capacity. Much of this work has focused on binary Reed-Muller (RM) codes and was driven by the goal of proving that they achieve capacity. For erasure channels, these results show that sequences of error-correcting codes achieve capacity under a symmetry condition that is only slightly stronger than transitivity. For example, this condition is satisfied by RM codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, and multidimensional product codes. For channels with errors, such as the binary symmetric channel (BSC), it is still unknown whether symmetry alone is sufficient to guarantee capacity-achieving performance. However, sequences of doubly-transitive codes with an additional nesting property have now been shown to achieve capacity. For now, RM codes are essentially the only codes known to satisfy these conditions (modulo a few small variations). The proofs of all these results rely crucially on the analysis of Boolean functions using Fourier decomposition and, in some cases, hypercontractivity.

These lectures will present RM codes and establish many of their special properties. The Fourier analysis of Boolean functions will also be introduced in detail and used to establish the results described above. We will also discuss promising research directions and open problems and, time permitting, make connections to past work in the theoretical computer science community on list and local decoding of RM codes.
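As a concrete illustration of how RM codes arise from Boolean functions (a sketch for orientation, not part of the lecture materials), the codewords of RM(r, m) are the evaluation vectors of multilinear polynomials of degree at most r over the 2^m points of {0,1}^m. The function name `rm_generator` below is illustrative:

```python
# Sketch: a generator matrix for the binary Reed-Muller code RM(r, m).
# Each row is the evaluation of one multilinear monomial of degree <= r
# at all 2^m points of {0,1}^m.
from itertools import combinations, product

def rm_generator(r, m):
    points = list(product([0, 1], repeat=m))  # the 2^m evaluation points
    rows = []
    for deg in range(r + 1):
        for subset in combinations(range(m), deg):
            # evaluate the monomial prod_{i in subset} x_i at every point
            # (the empty subset gives the constant-1 row)
            rows.append([1 if all(p[i] for i in subset) else 0 for p in points])
    return rows

G = rm_generator(1, 3)    # RM(1, 3): a [8, 4, 4] code
print(len(G), len(G[0]))  # dimension sum_{i<=r} C(m, i) = 4, length 2^m = 8
```

Spanning the rows over GF(2) recovers the familiar parameters: length 2^m, dimension equal to the number of monomials of degree at most r, and minimum distance 2^(m-r).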

Biography

Henry D. Pfister received his Ph.D. in Electrical Engineering in 2003 from the University of California, San Diego and is currently a professor in the Electrical and Computer Engineering Department of Duke University with a secondary appointment in Mathematics. Prior to that, he was an associate professor at Texas A&M University (2006-2014), a post-doctoral fellow at the École Polytechnique Fédérale de Lausanne (2005-2006), and a senior engineer at Qualcomm Corporate R&D in San Diego (2003-2004). His current research interests include information theory, error-correcting codes, quantum computing, and machine learning.

He received the NSF CAREER Award in 2008 and a Texas A&M ECE Department Outstanding Professor Award in 2010. He is a coauthor of the 2007 IEEE COMSOC best paper in Signal Processing and Coding for Data Storage, a coauthor of a 2016 Symposium on the Theory of Computing (STOC) best paper, and a recipient of the 2021 Information Theory Society Paper Award. He has served the IEEE Information Theory Society as a member of the Board of Governors (2019-2023), an Associate Editor for the IEEE Transactions on Information Theory (2013-2016), and a Distinguished Lecturer (2015-2016). He was the General Chair of the 2016 North American School of Information Theory and a Technical Program Committee Co-Chair of the 2021 International Symposium on Information Theory.

 

Cynthia Rush

Approximate Message Passing (AMP)

Abstract

Approximate Message Passing (AMP) refers to a class of iterative algorithms that have been successfully applied to a number of high-dimensional statistical estimation problems, such as linear regression, generalized linear models, and low-rank matrix estimation, as well as to a variety of engineering and computer science applications including imaging, communications, and deep learning. AMP algorithms have two features that make them particularly attractive: they can easily be tailored to exploit prior information on the structure of the signal, such as sparsity, and, under suitable assumptions on the design matrix, AMP theory provides precise asymptotic guarantees for statistical procedures in the high-dimensional regime. In this tutorial, I will present the main ideas of AMP from a statistical perspective to illustrate the power and flexibility of the framework. We will discuss how these algorithms characterize the exact performance of a large class of statistical estimators in the high-dimensional proportional asymptotic regime, as well as their applications in wireless communication.
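To give a flavor of the algorithm, here is a minimal sketch of the AMP iteration for sparse linear regression y = Ax + w, assuming an i.i.d. Gaussian design with variance 1/n and a soft-thresholding denoiser. The key ingredient distinguishing AMP from simple iterative thresholding is the "Onsager" correction term added to the residual. The signal, dimensions, and function names below are made up for the demonstration, not taken from the tutorial:

```python
# Sketch: AMP for the linear model y = A x + w with a soft-threshold denoiser.
import numpy as np

def soft(v, t):
    """Soft-thresholding denoiser: shrinks entries toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(y, A, alpha=1.5, iters=30):
    n, p = A.shape
    x = np.zeros(p)
    z = y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(z) / np.sqrt(n)      # estimated effective noise level
        x_new = soft(x + A.T @ z, alpha * tau)    # denoise the pseudo-observation
        # Onsager correction: (1/n) * number of nonzeros is the empirical
        # average of the denoiser's derivative, scaled by the aspect ratio
        z = y - A @ x_new + (np.count_nonzero(x_new) / n) * z
        x = x_new
    return x

rng = np.random.default_rng(0)
n, p, k = 250, 500, 10
A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, p))
x0 = np.zeros(p); x0[:k] = 3.0                    # k-sparse ground truth
y = A @ x0 + 0.01 * rng.normal(size=n)
xhat = amp(y, A)
print(np.linalg.norm(xhat - x0) / np.linalg.norm(x0))  # small relative error
```

A notable feature visible even in this toy run is that the pseudo-observation x + A.T @ z behaves like the true signal plus Gaussian noise of variance tau^2, which is exactly what the state evolution analysis discussed in the tutorial makes precise.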

Biography

Cynthia Rush is the Howard Levene Associate Professor of Statistics at Columbia University. She received a Ph.D. and M.A. in Statistics from Yale University in 2016 and 2011, respectively, and she completed her undergraduate coursework at the University of North Carolina at Chapel Hill, where she obtained a B.S. in Mathematics in 2010. She received an NSF CRII award in 2019, was a finalist for the 2016 IEEE Jack K. Wolf ISIT Student Paper Award, was an NTT Research Fellow at the Simons Institute for the Theory of Computing for the program on Probability, Computation, and Geometry in High Dimensions in Fall 2020, and was a Google Research Fellow at the Simons Institute for the Theory of Computing for the program on Computational Complexity of Statistical Inference in Fall 2021. Her research focuses on message passing algorithms, statistical robustness, and applications to wireless communications.

 

Aditya Gopalan

Sequential Machine Learning:
Multi-armed Bandits & Reinforcement Learning

Abstract

We will discuss several common sequential decision-making problems in which an agent faces uncertainty about the future and must continually reason in the face of it to make timely decisions. These problems arise in many modern data-driven intelligent systems (think Internet recommendation engines, financial portfolio allocation, resource allocation in communication systems, etc.) that must adapt to changing conditions. This mini-course will expose the audience to common formulations and algorithmic techniques, introducing tools for rigorous performance analysis along the way. We aim to cover the fundamentals of multi-armed bandits, regret bounds, PAC learning, and reinforcement learning.
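As a small taste of the bandit material (an illustrative sketch, not drawn from the course itself), the classic UCB1 index policy balances exploration and exploitation by playing the arm with the largest "empirical mean plus confidence bonus". The arm means below are invented for the demo:

```python
# Sketch: the UCB1 policy on a stochastic multi-armed bandit with
# Bernoulli rewards.  Regret = (best mean) * horizon - collected reward.
import math
import random

def ucb1(means, horizon, seed=0):
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # number of pulls of each arm
    sums = [0.0] * k      # total reward collected from each arm
    total_reward = 0.0
    for t in range(horizon):
        if t < k:
            arm = t       # initialization: play each arm once
        else:
            # UCB1 index: empirical mean + sqrt(2 log t / pulls)
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t + 1) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    regret = horizon * max(means) - total_reward
    return regret, counts

regret, counts = ucb1([0.3, 0.5, 0.7], horizon=5000)
print(regret, counts)  # the best arm (mean 0.7) receives most of the pulls
```

The course's regret bounds explain the behavior seen here: UCB1's regret grows only logarithmically in the horizon, so the realized regret stays far below that of a policy that keeps sampling suboptimal arms at a constant rate.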

Biography

Aditya Gopalan is an Associate Professor at the Indian Institute of Science, Bengaluru, in the Department of Electrical Communication Engineering (ECE). He received the Ph.D. degree in electrical engineering from The University of Texas at Austin and was subsequently a postdoctoral fellow at the Technion-Israel Institute of Technology. His research interests lie in statistical learning, with a focus on online and reinforcement learning, optimization and control, and statistical inference. He has served as an editor of the IEEE/ACM Transactions on Networking and is an Associate Fellow of the Indian National Science Academy.