University of California San Diego
****************************
Math 278C: Optimization and Data Science
Prof. Yuhua Zhu
UCSD
Reinforcement Learning in the Optimization Formulation
Abstract:
There are two types of algorithms in reinforcement learning (RL): value-based and policy-based. As nonlinear function approximations, such as deep neural networks, have become popular in RL, algorithmic instability is often observed in practice for both types of algorithms. One reason is that most algorithms rely on the contraction property of the Bellman operator, which may no longer hold under nonlinear approximation. In this talk, we will introduce two algorithms based on the Bellman residual whose performance does not depend on the contraction property of the Bellman operator. In both algorithms, we formulate RL as an unconstrained optimization problem. The first algorithm is value-based, where we assume the underlying dynamics are smooth. We propose an algorithm called Borrowing From the Future (BFF) and prove that it converges exponentially fast in model-free control. The second algorithm is policy-based. We propose an algorithm called variational actor-critic with flipping gradients and prove that it is guaranteed to converge to the optimal policy when the state space is finite.
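(Background sketch, not taken from the abstract: a standard form of the Bellman residual objective that such optimization formulations minimize. Instead of iterating the Bellman operator to a fixed point, one solves the unconstrained problem

    \min_{\theta} \; \mathbb{E}_{(s,a,r,s')} \Big[ \big( r + \gamma \max_{a'} Q_\theta(s',a') - Q_\theta(s,a) \big)^2 \Big],

where Q_\theta is the parametrized, possibly nonlinear, approximation of the action-value function and \gamma \in (0,1) is the discount factor. Gradient methods on this loss do not require the Bellman operator to be a contraction; the specific losses used in BFF and in the variational actor-critic are developed in the talk.)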
Host: Jiawang Nie
November 9, 2022
3:00 PM
https://ucsd.zoom.us/j/
Meeting ID: 941 9922 3268
Password: 278cf22
****************************