Scalable gradient matching based on state space Gaussian Processes

Futoshi Futami (NTT)*

Abstract

In many scientific fields, various phenomena are modeled by ordinary differential equations (ODEs). The parameters of ODEs are generally unknown and hard to measure directly. Since analytical solutions of ODEs can rarely be obtained, statistical methods are often used to infer parameters from experimental observations. Among the many existing methods, Gaussian process-based gradient matching has been explored extensively. However, existing methods do not scale to massive datasets: given $N$ data points, existing algorithms incur $\mathcal{O}(N^3)$ computational cost. In this paper, we propose a novel algorithm based on the state-space reformulation of Gaussian processes. More specifically, we reformulate Gaussian process gradient matching as a special state-space model problem and approximate its posterior distribution by a novel Rao-Blackwellized filtering, which enjoys $\mathcal{O}(N)$ computational cost. Moreover, because our algorithm is expressed in closed form, it is 1000 times faster than existing methods measured in wall-clock time.
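
To make the $\mathcal{O}(N)$ claim concrete, the following is a minimal sketch of the state-space view of a Gaussian process that the abstract relies on, not the paper's full algorithm (which additionally handles ODE gradient matching and the Rao-Blackwellized step). It uses the standard equivalence between a Matérn-3/2 GP and a two-dimensional linear-Gaussian state-space model, so the posterior mean at $N$ points is computed by a single Kalman filtering pass in $\mathcal{O}(N)$ time rather than by an $\mathcal{O}(N^3)$ dense GP solve. The kernel choice, function names, and hyperparameter values are illustrative assumptions.

```python
# Sketch: O(N) GP regression via the state-space (Kalman filter) formulation.
# Assumes a Matern-3/2 kernel; not the authors' code.
import numpy as np
from scipy.linalg import expm

def matern32_ssm(lengthscale, variance):
    """Continuous-time SDE representation of a Matern-3/2 GP."""
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0],
                  [-lam**2, -2.0 * lam]])          # drift matrix of the SDE
    H = np.array([[1.0, 0.0]])                     # observation picks out f(t)
    Pinf = np.diag([variance, lam**2 * variance])  # stationary state covariance
    return F, H, Pinf

def kalman_filter(t, y, noise_var, lengthscale=1.0, variance=1.0):
    """Single O(N) filtering pass over observations y at sorted times t."""
    F, H, Pinf = matern32_ssm(lengthscale, variance)
    m, P = np.zeros(2), Pinf.copy()
    means = np.empty(len(t))
    for k in range(len(t)):
        if k > 0:                                  # predict step
            A = expm(F * (t[k] - t[k - 1]))        # discretized transition
            Q = Pinf - A @ Pinf @ A.T              # process noise covariance
            m, P = A @ m, A @ P @ A.T + Q
        S = H @ P @ H.T + noise_var                # update step
        K = P @ H.T / S                            # Kalman gain
        m = m + (K * (y[k] - H @ m)).ravel()
        P = P - K @ (H @ P)
        means[k] = m[0]                            # posterior mean of f(t_k)
    return means

# Usage: smooth 10,000 noisy samples of sin(t) in linear time.
t = np.linspace(0.0, 50.0, 10_000)
y = np.sin(t) + 0.1 * np.random.randn(len(t))
posterior_mean = kalman_filter(t, y, noise_var=0.01, lengthscale=1.0)
```

Each step of the loop touches only a 2x2 state covariance, so the cost grows linearly in the number of observations; the paper's contribution is to carry this structure over to the gradient-matching posterior via Rao-Blackwellized filtering.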