Event Type: Seminar
Wednesday, January 15, 2020 4:30 PM
Yuhua Zhu, Stanford

Abstract: For model-free reinforcement learning, the main difficulty of stochastic Bellman residual minimization is the double sampling problem: while only a single sample of the next state is available in the model-free setting, two independent samples of the next state are required to perform unbiased stochastic gradient descent. We propose new algorithms that address this problem based on the key idea of borrowing extra randomness from the future. When the transition kernel varies slowly with respect to the state, we show that the training trajectories of the new algorithms stay close to that of unbiased stochastic gradient descent. We apply the new algorithms to policy evaluation in both tabular and neural network settings to confirm the theoretical findings. This is joint work with Lexing Ying.
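
The sketch below (not the speaker's proposed algorithm) illustrates the double sampling problem on a toy three-state Markov reward process: with a single next-state sample the stochastic gradient targets the expected squared temporal-difference error rather than the mean squared Bellman error, whereas two independent next-state samples give an unbiased gradient. The chain, the uniform state distribution, and names such as P, r, gamma, and sampled_gradient are all assumptions made for illustration.

```python
# Minimal sketch of the double sampling problem for Bellman residual
# minimization on a hypothetical 3-state Markov reward process.
import numpy as np

rng = np.random.default_rng(0)

# Toy MRP: transition matrix P, reward r, discount gamma (all made up).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
r = np.array([1.0, 0.0, -1.0])
gamma = 0.9

V = np.array([0.5, 0.2, -0.3])   # current tabular value estimate
n_states = len(r)

def true_msbe_gradient(V):
    """Exact gradient of the mean squared Bellman error
    (1/n) * sum_s (V[s] - r[s] - gamma * (P @ V)[s])**2."""
    delta = V - (r + gamma * P @ V)                  # Bellman residual per state
    return 2 * (np.eye(n_states) - gamma * P).T @ delta / n_states

def sampled_gradient(V, n_samples, double):
    """Monte Carlo gradient estimate.  With double=True, two independent
    next-state samples are drawn (unbiased for the Bellman residual);
    with double=False, one sample is reused in both factors (biased)."""
    g = np.zeros(n_states)
    for _ in range(n_samples):
        s = rng.integers(n_states)                   # uniform state
        s1 = rng.choice(n_states, p=P[s])            # first next-state sample
        s2 = rng.choice(n_states, p=P[s]) if double else s1
        td = V[s] - r[s] - gamma * V[s1]             # residual from sample 1
        grad_term = np.zeros(n_states)               # derivative uses sample 2
        grad_term[s] += 1.0
        grad_term[s2] -= gamma
        g += 2 * td * grad_term
    return g / n_samples

print("exact gradient  :", true_msbe_gradient(V))
print("double sampling :", sampled_gradient(V, 200_000, double=True))
print("single sampling :", sampled_gradient(V, 200_000, double=False))
```

With enough samples, the double-sampling estimate matches the exact gradient, while the single-sample estimate converges to a different vector because its expectation includes an extra variance term from the next-state transition.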