Popular (ensemble) Kalman filter data assimilation (DA) approaches assume that the errors in both the a priori estimate of the state and the observations are Gaussian. For constrained variables, for example, sea-ice concentration or stress, such an assumption does not hold. The variational autoencoder (VAE) is a machine-learning (ML) technique that allows us to map an arbitrary distribution to/from a latent space in which the distribution is supposedly closer to a Gaussian. We propose a novel hybrid DA–ML approach in which VAEs are incorporated in the DA procedure. Specifically, we introduce a variant of the popular ensemble transform Kalman filter (ETKF) in which the analysis is applied in the latent space of a single VAE or a pair of VAEs. In twin experiments with a simple circular model, whereby the circle represents an underlying submanifold to be respected, we find that the use of a VAE ensures that a posteriori ensemble members lie close to the manifold containing the truth. Furthermore, online updating of the VAE is necessary and achievable when this manifold varies in time, that is, when it is non-stationary. We demonstrate that introducing an additional second latent space for the observational innovations improves robustness against detrimental effects of non-Gaussianity and bias in the observational errors but lessens the performance slightly if observational errors are strictly Gaussian.
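To make the idea concrete, here is a minimal sketch of an ETKF analysis carried out in a latent space, using the paper's circle example. This is not the authors' code: the VAE is replaced by a hand-written encoder/decoder (latent variable = angle on the unit circle), and all names, dimensions, and noise levels are illustrative assumptions. The point it demonstrates is the one made in the abstract: because the update happens in latent space and the decoder maps back onto the circle, every a posteriori member respects the manifold.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X):
    # Stand-in "encoder": (2, m) points near the unit circle -> (1, m) angles.
    return np.arctan2(X[1], X[0])[None, :]

def decode(Z):
    # Stand-in "decoder": (1, m) angles -> (2, m) points exactly on the circle.
    return np.vstack([np.cos(Z[0]), np.sin(Z[0])])

def etkf_update(Z, y, obs_op, R):
    """Deterministic ETKF analysis applied to a latent ensemble Z of shape (k, m)."""
    m = Z.shape[1]
    z_mean = Z.mean(axis=1, keepdims=True)
    Zp = Z - z_mean                               # latent perturbations
    Y = obs_op(Z)                                 # ensemble mapped to observation space
    Yp = Y - Y.mean(axis=1, keepdims=True)
    Rinv = np.linalg.inv(R)
    # Ensemble-space analysis covariance: [(m-1)I + Yp^T R^-1 Yp]^-1
    Pa = np.linalg.inv((m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp)
    w_mean = Pa @ Yp.T @ Rinv @ (y - Y.mean(axis=1, keepdims=True))
    # Symmetric square-root transform for the analysis perturbations.
    vals, vecs = np.linalg.eigh((m - 1) * Pa)
    W = vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0))) @ vecs.T
    return z_mean + Zp @ (w_mean + W)

# Twin-experiment flavour: truth on the unit circle, noisy observation of x.
theta_true = 1.0
truth = np.array([np.cos(theta_true), np.sin(theta_true)])
m = 20
X = truth[:, None] + 0.3 * rng.standard_normal((2, m))   # prior members drift off the circle
Z = encode(X)                                            # analysis happens in latent space
R = np.array([[0.05 ** 2]])
y = truth[:1, None] + 0.05 * rng.standard_normal((1, 1))
obs_op = lambda Z: decode(Z)[0:1]                        # observe the x-coordinate of decoded states
Za = etkf_update(Z, y, obs_op, R)
Xa = decode(Za)
print(np.allclose(np.linalg.norm(Xa, axis=0), 1.0))      # analysis members lie on the circle
```

With a real VAE, `encode`/`decode` would be the trained (approximately invertible) networks, and the paper's two-latent-space variant would additionally push the observational innovations through a second encoder before the update.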

The paper is by Ivo Pasmans, then at the University of Reading and now at ECMWF. It is entitled "Ensemble Kalman filter in latent space using a variational autoencoder pair" and is published in the Quarterly Journal of the Royal Meteorological Society.