University of California San Diego
****************************
Math 278B - Mathematics of Information, Data, and Signals Seminar

Yi Ma
UC Berkeley

Deep Networks from First Principles

Abstract:
In this talk, we offer an entirely "white box" interpretation of deep (convolutional) networks from the perspective of data compression (and group invariance). In particular, we show how modern deep layered architectures, linear (convolutional) operators and nonlinear activations, and even all parameters can be derived from the principle of maximizing rate reduction (with group invariance). All layers, operators, and parameters of the network are explicitly constructed via forward propagation, instead of learned via back propagation. All components of the so-obtained network, called ReduNet, have precise optimization, geometric, and statistical interpretations. There are also several nice surprises from this principled approach: it reveals a fundamental tradeoff between invariance and sparsity for class separability; it reveals a fundamental connection between deep networks and the Fourier transform for group invariance, namely the computational advantage of working in the spectral domain (why spiking neurons?); and it clarifies the mathematical roles of forward propagation (optimization) and backward propagation (variation). In particular, the so-obtained ReduNet is amenable to fine-tuning via both forward and backward (stochastic) propagation, both optimizing the same objective.

This is joint work with Yaodong Yu, Ryan Chan, and Haozhi Qi of Berkeley, Dr. Chong You, now at Google Research, and Professor John Wright of Columbia University.
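For concreteness, here is a minimal NumPy sketch of the two ingredients the abstract names: the coding rate reduction objective being maximized, and one forward-constructed layer obtained as a gradient-ascent step on that objective, with no back propagation. This is an illustrative reading of the principle based on the joint work cited above, not the authors' reference implementation; the function names and the parameters eps (distortion) and eta (step size) are assumptions made here.

```python
import numpy as np

def rate(Z, eps):
    """Coding rate R(Z) of features Z (d x m): a log-det estimate of the
    bits needed to encode the columns of Z up to distortion eps."""
    d, m = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (m * eps**2)) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Rate reduction objective: the rate of all features together minus the
    membership-weighted sum of per-class rates. Maximizing it expands the
    whole representation while compressing each class."""
    m = Z.shape[1]
    Rc = sum((np.sum(labels == j) / m) * rate(Z[:, labels == j], eps)
             for j in np.unique(labels))
    return rate(Z, eps) - Rc

def redunet_layer(Z, labels, eps=0.5, eta=0.5):
    """One forward-constructed layer: a gradient-ascent step on the rate
    reduction, followed by projection of each feature back onto the unit
    sphere. E expands all features; each C_j compresses class j."""
    d, m = Z.shape
    alpha = d / (m * eps**2)
    E = alpha * np.linalg.inv(np.eye(d) + alpha * Z @ Z.T)  # expansion operator
    grad = E @ Z
    for j in np.unique(labels):
        mask = labels == j
        Zj = Z[:, mask]
        mj = Zj.shape[1]
        alpha_j = d / (mj * eps**2)
        # compression operator for class j
        Cj = alpha_j * np.linalg.inv(np.eye(d) + alpha_j * Zj @ Zj.T)
        grad[:, mask] -= (mj / m) * Cj @ Zj
    Z_next = Z + eta * grad
    return Z_next / np.linalg.norm(Z_next, axis=0, keepdims=True)
```

In this reading, each layer is built in closed form from the current features: the expansion operator E and the per-class compression operators C_j play the role of the network's linear weights, and the sphere projection plays the role of the nonlinear activation.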
Host: Rayan Saab

April 1, 2021
11:30 AM
Zoom link: https://msu.zoom.us/j/96421373881 (passcode: first prime number > 100)
****************************