9:30 – 9:55 Zach Charles (ECE):

Stability and Generalization of Convergent Learning Algorithms

In machine learning we are often interested in how well a learning algorithm generalizes to new data. To quantify the generalization error, we will use the notion of stability. Roughly speaking, a learning algorithm is stable if small changes in the training data do not lead to large changes in the output model. By studying the stability of learning algorithms, we will derive generalization bounds that depend only on the convergence of the algorithm and the geometry of the empirical risk function. These bounds hold even for non-convex risk functions satisfying the Polyak-Łojasiewicz (PL) inequality. While this condition has a rich history in optimization, we will further motivate it in machine learning settings by showing that it arises in neural networks with linear activations.
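As an illustrative sketch (not part of the talk itself): the PL inequality states that (1/2)|∇f(x)|² ≥ μ(f(x) − f*) for some μ > 0, which yields linear convergence of gradient descent without convexity. A standard one-dimensional example, f(x) = x² + 3 sin²(x), is non-convex yet satisfies PL; the step size and grid below are illustrative choices, not from the abstract.

```python
import math

# f(x) = x^2 + 3*sin(x)^2 is non-convex but satisfies the PL inequality
#   0.5 * |f'(x)|^2 >= mu * (f(x) - f*),  with f* = 0 attained at x = 0.
def f(x):
    return x**2 + 3 * math.sin(x)**2

def grad(x):
    return 2 * x + 3 * math.sin(2 * x)

# Numerically estimate the PL constant on a grid away from the minimizer.
mu_est = min(
    0.5 * grad(x)**2 / f(x)
    for x in (i / 100 for i in range(1, 1001))  # x in (0, 10]
)
print(f"empirical PL constant on (0, 10]: {mu_est:.4f}")

# Under PL, gradient descent with a small enough step converges linearly:
#   f(x_t) - f* <= (1 - mu/L)^t * (f(x_0) - f*).
x, step = 5.0, 0.1  # step < 1/L, since |f''| <= 8 here
for _ in range(100):
    x -= step * grad(x)
print(f"f after 100 gradient steps: {f(x):.2e}")
```

The numerical check illustrates why PL is weaker than convexity: the function has regions of negative curvature, yet the squared gradient norm stays bounded below by a multiple of the suboptimality, which is exactly what the convergence-based generalization bounds in the talk exploit.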