Accepted Paper: Towards Governing Agent's Efficacy: Action-Conditional β-VAE for Deep Transparent Reinforcement Learning


Authors

John Yang (Seoul National University); Gyuejeong Lee (Seoul National University); Simyung Chang (Seoul National University); Nojun Kwak (Seoul National University)

Abstract

We tackle the black-box issue of deep neural networks in reinforcement learning (RL) settings, where neural agents learn to maximize reward gains in an uncontrollable way. Such a learning approach is risky when the interacting environment has a vast state space, since it is then nearly impossible to foresee all unwanted outcomes and penalize them with negative rewards beforehand. Unlike prior works that reverse-analyze learned neural features, our proposed method addresses the black-box issue by encouraging an RL policy network to learn interpretable latent features through a disentangled representation learning method. Toward this end, our method allows an RL agent to understand its self-efficacy by distinguishing its own influences from uncontrollable environmental factors, which closely resembles the way humans understand their surroundings. Our experimental results show that the learned latent factors are not only interpretable but also enable modeling the distribution of the entire visited state space under a specific action condition. We further show experimentally that this characteristic of the proposed structure can lead to ex post facto governance of the desired behaviors of RL agents.
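To make the core idea concrete, below is a minimal PyTorch sketch of an action-conditional β-VAE objective: an encoder produces a Gaussian posterior over latent factors, a decoder reconstructs a state conditioned on the taken action, and a β-weighted KL term pressures the latents toward disentanglement. All names, layer sizes, and the choice of reconstruction target here are illustrative assumptions and do not reproduce the paper's actual architecture.

```python
# Hypothetical sketch of an action-conditional beta-VAE (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActionConditionalBetaVAE(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim, beta=4.0):
        super().__init__()
        self.beta = beta
        # Encoder maps a state to the parameters of a diagonal Gaussian posterior.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),
        )
        # Decoder reconstructs the state conditioned on the agent's action, so
        # latent factors that respond to the action capture the agent's own
        # influence, separating it from environmental factors.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, state_dim),
        )

    def forward(self, state, action_onehot):
        mu, logvar = self.encoder(state).chunk(2, dim=-1)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)   # reparameterization trick
        recon = self.decoder(torch.cat([z, action_onehot], dim=-1))
        return recon, mu, logvar

def beta_vae_loss(recon, target, mu, logvar, beta):
    # Reconstruction term plus a beta-weighted KL divergence to the unit
    # Gaussian; beta > 1 trades reconstruction fidelity for disentanglement.
    recon_loss = F.mse_loss(recon, target, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl
```

Because the decoder is conditioned on the action, sampling latents under a fixed action condition gives a way to model the distribution of visited states for that action, which is the property the abstract leverages for ex post facto governance.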