- Session 4: Unsupervised, Semi-supervised Learning, Reinforcement Learning -- Day 3 (Nov.19), talks: 10:50-11:30 (5th floor Hall 2), poster session: 11:30-14:00
- Poster number: Tue22
Geon-Hyeong Kim (KAIST); Youngsoo Jang (KAIST); Jongmin Lee (KAIST); Wonseok Jeon (MILA, McGill University); Hongseok Yang (KAIST); Kee-Eung Kim (KAIST)
Stochastic variational inference has emerged as an effective method for performing inference in, or learning, complex models from data. Yet, one of the challenges in stochastic variational inference is handling high-dimensional data, such as sequential data, and models with non-differentiable densities caused by, for instance, the use of discrete latent variables. In such cases, it is difficult to control the variance of the gradient estimator used in stochastic variational inference, while low variance is often one of the key properties needed for successful inference. In this work, we present a new algorithm for stochastic variational inference of sequential models that trades off bias for reduced variance to tackle this challenge effectively. Our algorithm is inspired by variance reduction techniques in reinforcement learning, yet it uniquely adapts their key ideas to the context of stochastic variational inference. We demonstrate the effectiveness of our approach through formal analysis and experiments on synthetic and real-world datasets.
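The variance problem the abstract refers to arises with score-function (REINFORCE-style) gradient estimators, which are needed when latent variables are discrete. A standard reinforcement-learning-flavored remedy is subtracting a baseline from the objective before multiplying by the score. The sketch below is a generic illustration of that baseline idea on a toy Bernoulli model, not the paper's algorithm; the function `f` and all variable names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: q is Bernoulli(p) over a discrete latent z, and we estimate
# d/dp E_q[f(z)] with the score-function estimator. f is treated as a
# black box (it need not be differentiable in z).
p = 0.3
f = lambda z: 2.0 * z + 1.0          # illustrative target; true gradient is 2

def score(z, p):
    # d/dp log Bernoulli(z; p) = z/p - (1 - z)/(1 - p)
    return z / p - (1 - z) / (1 - p)

n = 200_000
z = rng.binomial(1, p, size=n)

grad_plain = score(z, p) * f(z)              # no baseline: unbiased, noisy
b = f(z).mean()                              # simple mean baseline
grad_base = score(z, p) * (f(z) - b)         # baseline-subtracted estimator

# A constant baseline leaves the estimator unbiased, because
# E[score(z, p)] = 0, but it typically shrinks the variance a lot.
print("means:", grad_plain.mean(), grad_base.mean())
print("variances:", grad_plain.var(), grad_base.var())
```

Both estimators concentrate around the true gradient, but the baseline-subtracted one does so with far lower variance; the paper's contribution, per the abstract, is to go further and accept a controlled amount of bias for additional variance reduction in the sequential-model setting.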