University of California San Diego
****************************
278B - Mathematics of Information, Data, and Signals
Hedrick Assistant Adjunct Professor Michael Murray
UCLA
Training Shallow ReLU Networks on Noisy Data Using Hinge Loss: When Do We Overfit and Is It Benign?
Abstract:
In this talk I'll discuss recent work studying benign overfitting in two-layer ReLU networks trained using gradient descent and hinge loss on noisy data for binary classification. Unlike for logistic or exponentially tailed losses, the implicit bias in this setting is poorly understood, and therefore our results and techniques are distinct from other recent and concurrent works on this topic. In particular, we consider linearly separable data for which a relatively small proportion of labels are corrupted, and we identify conditions on the margin of the clean data which give rise to three distinct training outcomes: benign overfitting, in which zero loss is achieved and with high probability test data is classified correctly; overfitting, in which zero loss is achieved but test data is misclassified with probability lower bounded by a constant; and non-overfitting, in which clean points, but not corrupt points, achieve zero loss and again with high probability test data is classified correctly.

Our analysis provides a fine-grained description of the dynamics of neurons throughout training and reveals two distinct phases: in the first phase clean points achieve close to zero loss; in the second phase clean points oscillate on the boundary of zero loss while corrupt points either converge towards zero loss or are eventually zeroed by the network. We prove these results using a combinatorial approach that involves bounding the number of clean versus corrupt updates across these phases of training.
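To make the setting concrete, here is a minimal sketch in PyTorch of the kind of experiment the abstract describes: a two-layer ReLU network with a fixed second layer, trained by full-batch gradient descent on hinge loss over linearly separable data with a small fraction of flipped labels. The dimensions, margin, width, learning rate, and noise fraction below are illustrative assumptions, not values from the work, and fixing the outer layer is a common simplification in such analyses rather than necessarily the speaker's exact setup.

import torch

torch.manual_seed(0)

# Linearly separable data with margin at least 1 along a ground-truth
# direction, after which a small fraction of labels is flipped ("corrupt").
n, d, noise_frac = 200, 50, 0.05
w_star = torch.randn(d)
w_star /= w_star.norm()
X = torch.randn(n, d)
y_clean = torch.sign(X @ w_star)
X = X + y_clean[:, None] * w_star          # push each point off the boundary
y = y_clean.clone()
flip = torch.rand(n) < noise_frac
y[flip] *= -1

# Two-layer ReLU network f(x) = a^T relu(W x); only W is trained,
# with the outer weights a fixed at random signs scaled by 1/m.
m = 100
W = (0.1 * torch.randn(m, d)).requires_grad_()
a = (2.0 * torch.randint(0, 2, (m,)).float() - 1.0) / m

lr, steps = 0.5, 3000
for _ in range(steps):
    out = torch.relu(X @ W.T) @ a                     # outputs, shape (n,)
    loss = torch.clamp(1.0 - y * out, min=0).mean()   # hinge loss
    if W.grad is not None:
        W.grad.zero_()
    loss.backward()
    with torch.no_grad():
        W -= lr * W.grad

# Zero hinge loss on all points means the noisy labels were fit (overfitting);
# agreement with the clean labels is a proxy for whether it was benign.
with torch.no_grad():
    out = torch.relu(X @ W.T) @ a
    print("hinge loss:", torch.clamp(1.0 - y * out, min=0).mean().item())
    print("fits noisy labels:", (torch.sign(out) == y).float().mean().item())
    print("agrees with clean labels:", (torch.sign(out) == y_clean).float().mean().item())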
Host: Alex Cloninger
October 26, 2023
11:30 AM
APM 2402
****************************