The workshop will take place on the 10th of April, 2024. It's a one-day event with talks and discussions revolving around probabilistic numerical methods. The workshop is open to all registered participants.

Confirmed Speakers

Han Cheng Lie

Universität Potsdam ➔ Website

Frederik De Ceuster

Institute of Astronomy (KU Leuven) ➔ Website

Disha Hegde

University of Southampton

Learning to Solve Related Linear Systems
Solving multiple large linear systems defined across a parameter space is central to many numerical tasks, such as solving nonlinear PDEs arising from applications like fluid dynamics, optimising hyperparameters of Gaussian processes, and finding coefficients for statistical models. The computational expense of solving these linear systems can be lowered if their interdependence across the parameter space is exploited efficiently. This talk extends the idea of probabilistic linear solvers to a space of parameters. The probabilistic solver is used alongside standard iterative solvers such as the Conjugate Gradient method to provide an efficient initial guess and a preconditioner that accelerate convergence.
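A minimal sketch of the general idea behind exploiting related systems (not the probabilistic solver presented in the talk): when solving a parameterised family A(θ)x = b, the solution at one parameter can warm-start the Conjugate Gradient iteration at a nearby parameter. The matrix family `family_matrix` below is an illustrative assumption.

```python
# Warm-starting CG across a parameterised family of SPD systems A(theta) x = b.
# The matrix family here is a hypothetical, smoothly varying example.
import numpy as np
from scipy.sparse.linalg import cg

def family_matrix(theta, n=50):
    # SPD matrix that varies smoothly with theta (illustrative choice).
    i = np.arange(n)
    A = np.exp(-np.abs(i[:, None] - i[None, :]) / (1.0 + theta))
    return A + n * np.eye(n)

n = 50
b = np.ones(n)
thetas = np.linspace(0.1, 1.0, 5)

x_prev = np.zeros(n)  # trivial initial guess for the first system
for theta in thetas:
    A = family_matrix(theta, n)
    # Warm start: reuse the previous parameter's solution as the initial guess.
    x, info = cg(A, b, x0=x_prev)
    assert info == 0  # CG converged
    x_prev = x
```

Because nearby parameters give nearby solutions, the warm-started iterations typically converge in fewer steps than starting from zero each time; the talk's probabilistic solver goes further by also supplying a preconditioner.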

Motonobu Kanagawa

Eurecom ➔ Website

Comparing Scale Parameter Estimators for Gaussian Process Regression: Cross Validation and Maximum Likelihood
Gaussian process (GP) regression is a Bayesian nonparametric method for regression and interpolation, offering a principled way of quantifying the uncertainties of predicted function values. For the quantified uncertainties to be well-calibrated, however, the covariance kernel of the GP prior has to be carefully selected. In this talk, we theoretically compare two methods for choosing the kernel in GP regression: cross-validation and maximum likelihood estimation. Focusing on the scale-parameter estimation of a Brownian motion kernel in the noiseless setting, we prove that cross-validation can yield asymptotically well-calibrated credible intervals for a broader class of ground-truth functions than maximum likelihood estimation, suggesting an advantage of the former over the latter. (Joint work with Masha Naslidnyk, Toni Karvonen, and Maren Mahsereci)
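The two estimators compared in the talk both have closed forms when the kernel is a pure scale family sigma^2 * k0. The sketch below (an illustration under assumed data, not the talk's analysis) computes the maximum-likelihood estimate of sigma^2 and the leave-one-out cross-validation estimate that maximises the LOO log predictive density, for a Brownian motion kernel k0(s, t) = min(s, t) in the noiseless setting.

```python
# ML vs leave-one-out CV estimates of the scale sigma^2 in a GP prior
# sigma^2 * k0 with Brownian motion kernel k0(s, t) = min(s, t).
import numpy as np

def brownian_kernel(t):
    return np.minimum(t[:, None], t[None, :])

rng = np.random.default_rng(0)
n = 40
t = np.sort(rng.uniform(0.1, 1.0, n))
y = np.sqrt(t)  # an arbitrary ground-truth function, chosen for illustration

K0 = brownian_kernel(t)
K0_inv = np.linalg.inv(K0)
alpha = K0_inv @ y

# ML estimate: closed form for a pure scale parameter in the noiseless case.
sigma2_ml = y @ alpha / n

# LOO-CV estimate maximising the log predictive density, via the
# standard closed-form LOO residuals in terms of diag(K0^{-1}).
d = np.diag(K0_inv)
sigma2_cv = np.mean(alpha**2 / d)

print(sigma2_ml, sigma2_cv)
```

The talk's result concerns which of these two estimators yields asymptotically well-calibrated credible intervals for a given class of ground-truth functions.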

Katharina Ott

University of Tübingen

Paz Fink Shustin

University of Oxford ➔ Website

Scalable Gaussian Process Regression with Gauss-Legendre Features
Gaussian processes provide a powerful probabilistic kernel learning framework, facilitating high-quality nonparametric learning via methods such as Gaussian process regression. Nevertheless, the learning phase requires prohibitively large computations for large datasets. In this talk, we present a quadrature-based approach for scaling up Gaussian process regression via a low-rank approximation of the kernel matrix. Leveraging this low-rank structure enables us to achieve effective hyperparameter learning, training, and prediction. Our method is inspired by the well-known random Fourier features approach, which also builds low-rank approximations via numerical integration. However, our method generates a high-quality kernel approximation using a number of features that is poly-logarithmic in the number of training points, whereas similar guarantees for random Fourier features require a number of features that is at least linear in the number of training points. The utility of our method for learning with low-dimensional datasets is demonstrated through numerical experiments.
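The core idea of quadrature-based features can be illustrated in one dimension (a hedged sketch, not the talk's construction): the Gaussian kernel is the Fourier transform of its spectral density, so applying Gauss-Legendre quadrature to that integral yields deterministic cosine/sine features whose inner products approximate the kernel.

```python
# Low-rank approximation of the 1-D Gaussian kernel exp(-(x-x')^2/2)
# via Gauss-Legendre quadrature of its spectral density (a sketch).
import numpy as np
from numpy.polynomial.legendre import leggauss

def gl_features(x, m=40, L=6.0):
    # Gauss-Legendre nodes/weights, mapped from [-1, 1] to frequencies [-L, L].
    nodes, weights = leggauss(m)
    w = L * nodes          # quadrature frequencies
    q = L * weights        # quadrature weights
    p = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)  # spectral density of the kernel
    scale = np.sqrt(q * p)
    # Scaled cosine/sine features: Phi @ Phi.T approximates the kernel matrix,
    # since cos(a)cos(b) + sin(a)sin(b) = cos(a - b).
    return np.concatenate([scale * np.cos(np.outer(x, w)),
                           scale * np.sin(np.outer(x, w))], axis=1)

x = np.linspace(-2, 2, 30)
Phi = gl_features(x)                                  # 30 x 80 feature matrix
K_approx = Phi @ Phi.T
K_exact = np.exp(-(x[:, None] - x[None, :])**2 / 2)
err = np.max(np.abs(K_approx - K_exact))
```

Unlike random Fourier features, which sample frequencies from the spectral density, the quadrature nodes are deterministic, which is what enables the much smaller feature counts claimed in the abstract for low-dimensional inputs.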