
Learning the Kalman Filter with Fine-Grained Sample Complexity


Abstract

We develop the first end-to-end sample complexity analysis of model-free policy gradient (PG) methods in discrete-time infinite-horizon Kalman filtering. Specifically, we introduce the receding-horizon policy gradient (RHPG-KF) framework and establish the sample complexity of RHPG-KF for learning a stabilizing filter that is ϵ-close to the optimal Kalman filter. Notably, the proposed RHPG-KF framework does not require the system to be open-loop stable, nor does it assume any prior knowledge of a stabilizing filter. Our results shed light on applying model-free PG methods to control linear dynamical systems whose state measurements may be corrupted by statistical noise and other (possibly adversarial) disturbances.
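As a rough illustration of the model-free PG idea behind this work, the sketch below learns a steady-state filter gain for a small simulated linear system using zeroth-order (two-point) gradient estimates of a rollout cost. It is not the paper's RHPG-KF algorithm (in particular, it omits the receding-horizon decomposition); the system matrices, horizon, step sizes, and the rollout_cost / zeroth_order_grad helpers are hypothetical choices made only for this example.

```python
import numpy as np

rng = np.random.default_rng(1)    # generator for perturbation directions and rollout seeds

# Hypothetical 2-state / 1-output system, chosen only for illustration
# (not taken from the paper); note the open-loop eigenvalue at 1.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])        # state dynamics
C = np.array([[1.0, 0.0]])        # partial, noisy measurement
W = 0.01 * np.eye(2)              # process-noise covariance
V = 0.04 * np.eye(1)              # measurement-noise covariance

def rollout_cost(L, seed, T=200):
    """Average squared one-step prediction error of filter gain L on one simulated trajectory."""
    noise = np.random.default_rng(seed)
    x = np.zeros(2)
    x_hat = np.zeros(2)
    cost = 0.0
    for _ in range(T):
        y = C @ x + noise.multivariate_normal(np.zeros(1), V)
        # Predictor-form filter: x_hat <- A x_hat + L (y - C x_hat)
        x_hat = A @ x_hat + L @ (y - C @ x_hat)
        x = A @ x + noise.multivariate_normal(np.zeros(2), W)
        cost += float((x - x_hat) @ (x - x_hat))
    return cost / T

def zeroth_order_grad(L, radius=0.05, n_pairs=10):
    """Two-point zeroth-order estimate of the gradient of the rollout cost w.r.t. L."""
    grad = np.zeros_like(L)
    for _ in range(n_pairs):
        U = rng.standard_normal(L.shape)
        U /= np.linalg.norm(U)
        seed = int(rng.integers(1_000_000))   # common random numbers for both rollouts
        delta = rollout_cost(L + radius * U, seed) - rollout_cost(L - radius * U, seed)
        grad += (delta / (2.0 * radius)) * U
    return grad / n_pairs

L = np.zeros((2, 1))   # start from the zero gain; no stabilizing filter is assumed
for _ in range(300):
    L -= 0.01 * zeroth_order_grad(L)

print("learned filter gain:\n", L)
```

Reusing one noise seed for both perturbed rollouts (common random numbers) is only a variance-reduction convenience for this sketch, not a prescription from the paper.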

Description

2023 American Control Conference (ACC), San Diego, CA, USA, 2023, pp. 4549-4554, doi: 10.23919/ACC55779.2023.10156641.

Country
USA
Affiliation
University of Illinois Urbana-Champaign
IEEE Region
Region 04 (Central U.S.)