SDS Colloquium, Speaker Professor Yang Feng

When: 2:30 – 3:30 p.m., Nov. 4, 2024
Where: ENR2 S215

Title: Learning from Similar Linear Representations: Adaptivity, Minimaxity, and Robustness

Abstract: Representation multi-task learning (MTL) and transfer learning (TL) have achieved
tremendous success in practice. However, the theoretical understanding of these methods is
still lacking. Most existing theoretical works focus on cases where all tasks share the same
representation, and claim that MTL and TL almost always improve performance. However, as
the number of tasks grows, assuming all tasks share the same representation is unrealistic.
Moreover, this assumption does not always align with empirical findings, which suggest that a
shared representation does not necessarily improve single-task or target-only learning
performance. In this paper, we
aim to understand how to learn from tasks with similar but not exactly the same linear
representations, while dealing with outlier tasks. With a known intrinsic dimension, we propose
two algorithms that are adaptive to the similarity structure and robust to outlier tasks under
both MTL and TL settings. Our algorithms outperform single-task or target-only learning when
representations across tasks are sufficiently similar and the fraction of outlier tasks is small.
Furthermore, they always perform no worse than single-task learning or target-only learning,
even when the representations are dissimilar. We provide information-theoretic lower bounds
to show that our algorithms are nearly minimax optimal in a large regime. We also propose an
algorithm to adapt to the unknown intrinsic dimension. We conduct two simulation studies to
verify our theoretical results.
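
The sketch below is a toy illustration of the setting the abstract describes, not the algorithms from the talk: each task's regression coefficients lie in a low-dimensional subspace that is close to, but not exactly equal to, a shared one, and a small fraction of tasks are outliers. The dimensions, the perturbation scale h, the outlier count, and the naive shared-subspace estimator used for comparison are all illustrative assumptions.

```python
# Toy simulation of "similar but not identical" linear representations.
# All parameters below are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

p, r, T, n = 50, 3, 20, 100   # ambient dim, intrinsic dim, number of tasks, samples per task
h = 0.1                        # similarity level: size of per-task subspace perturbation
n_outliers = 2                 # tasks whose coefficients ignore the shared structure

# Shared orthonormal representation A (p x r).
A, _ = np.linalg.qr(rng.standard_normal((p, r)))

betas, data = [], []
for t in range(T):
    if t < n_outliers:
        beta_t = rng.standard_normal(p)              # outlier task: arbitrary coefficients
    else:
        # Perturb the shared subspace, then re-orthonormalize.
        A_t, _ = np.linalg.qr(A + h * rng.standard_normal((p, r)))
        beta_t = A_t @ rng.standard_normal(r)
    X = rng.standard_normal((n, p))
    y = X @ beta_t + 0.5 * rng.standard_normal(n)
    betas.append(beta_t)
    data.append((X, y))

# Single-task baseline: per-task ridge regression.
def ridge(X, y, lam=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

single_err = np.mean([np.linalg.norm(ridge(X, y) - b) ** 2
                      for (X, y), b in zip(data, betas)])

# Naive shared-representation estimate: top-r principal subspace of the stacked
# single-task estimates, then refit each task's low-dimensional coefficients.
B_hat = np.column_stack([ridge(X, y) for X, y in data])
U, _, _ = np.linalg.svd(B_hat, full_matrices=False)
A_hat = U[:, :r]
shared_err = np.mean([
    np.linalg.norm(A_hat @ ridge(X @ A_hat, y, lam=0.1) - b) ** 2
    for (X, y), b in zip(data, betas)
])

print(f"avg estimation error, single-task ridge:     {single_err:.3f}")
print(f"avg estimation error, shared-subspace refit: {shared_err:.3f}")
```

Increasing h or n_outliers in this sketch moves the simulation toward the regime where, per the abstract, adaptive methods should fall back to single-task or target-only performance rather than relying on the shared structure.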

Relevant Papers: