An Information-Theoretic Characterization of Rate–Distortion–Perception–Task Tradeoffs in Learning Representations
About this Event
165 SW Sackett Place, Corvallis, OR 97331
Date: Feb. 27, 2026
Time: 2 p.m.
Location: LINC 268
Speaker: Prof. Thinh Nguyen, School of EECS, Oregon State University
Abstract:
We present an information-theoretic framework for learning data representations under multiple objectives, including rate, mean-squared distortion, perceptual quality, and task-related performance. Such multi-objective considerations arise naturally in data-driven systems, where representations must simultaneously support efficient storage or transmission, human-perceived quality, and downstream inference tasks.

The framework characterizes the fundamental tradeoffs among these criteria and investigates universal representations, asking whether a single encoder can support multiple operating points through decoder adaptation. One-shot information-theoretic tools are then employed to establish these tradeoffs in both finite and asymptotic regimes, linking theoretical insight with practical multi-objective representation learning. The framework’s utility is illustrated through theoretical examples as well as empirical evaluations on representative tasks such as inpainting, denoising, and super-resolution, viewed through the lens of cross-domain lossy compression and constrained optimal transport.
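For context, a standard formulation of the rate–distortion–perception tradeoff is the information rate–distortion–perception function of Blau and Michaeli; the talk's framework adds task-related criteria and may differ in its precise setup, so the following is only an illustrative baseline:

```latex
R(D, P) \;=\; \min_{p_{\hat{X} \mid X}} \; I(X; \hat{X})
\quad \text{subject to} \quad
\mathbb{E}\!\left[\Delta(X, \hat{X})\right] \le D,
\qquad
d\!\left(p_X, p_{\hat{X}}\right) \le P,
```

where $\Delta$ is a per-sample distortion measure (e.g., squared error) and $d$ is a divergence between the source and reconstruction distributions, capturing perceptual quality. A task objective can be incorporated as an additional constraint, for instance a bound on the loss of a downstream inference task performed on $\hat{X}$.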
Bio:
Thinh Nguyen is a professor in the School of EECS at Oregon State University. He received his Ph.D. from UC Berkeley and has co-authored several award-winning papers at leading conferences. His research spans probability theory and its applications, from signal processing and information theory to wireless communications, networking, distributed optimization, and quantum computing. Lately, he has been dabbling in the world of AI, applying his stochastic lens to problems at the intersection of AI and information theory.