WG 1 - CT for ML


Overview

The control-theoretic (CT) perspective on learning with deep residual neural networks has recently proved to be not only a reliable computational tool but also a fruitful source of original theoretical results for a multitude of problems, including adversarial robustness, generative modelling and generalisation bounds. Similarly, dynamic Bayesian networks, a central paradigm in statistical ML, can be analysed with classical CT tools such as Kalman filters. The aim of this working group is to explore these connections and contribute to the mathematics of reliable ML.
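The link between residual networks and control theory mentioned above rests on one observation: a residual block x_{k+1} = x_k + h·f(x_k) is a forward-Euler step of the ODE dx/dt = f(x(t)). A minimal sketch, with an illustrative tanh velocity field and all weights, dimensions and step sizes chosen only for demonstration:

```python
import numpy as np

# Sketch of the control-theoretic view of ResNets: composing residual
# blocks x <- x + h * f(x) discretises the flow of dx/dt = f(x(t)).
rng = np.random.default_rng(0)
d = 4                                  # feature (state) dimension, illustrative
W = rng.standard_normal((d, d)) * 0.1  # illustrative block weights
b = rng.standard_normal(d) * 0.1

def f(x):
    """Velocity field of one residual block: tanh(Wx + b)."""
    return np.tanh(W @ x + b)

def resnet_forward(x0, depth, h):
    """Compose `depth` residual blocks with step size h (Euler scheme)."""
    x = x0
    for _ in range(depth):
        x = x + h * f(x)
    return x

x0 = rng.standard_normal(d)
# Doubling the depth while halving h approximates the same flow at time
# t = depth * h, so the two outputs should nearly coincide.
coarse = resnet_forward(x0, depth=50, h=0.02)
fine = resnet_forward(x0, depth=100, h=0.01)
print(np.linalg.norm(coarse - fine))
```

This "neural ODE" reading is what makes tools from optimal control and dynamical systems applicable to questions such as robustness and generalisation.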


Tasks

  1. Obtaining error estimates for ML surrogates
  2. Theoretical guarantees for physics-informed neural networks
  3. Application of game theory to ML
  4. Kalman filters in statistical ML
  5. Analysis of high-dimensional deep neural networks and transformers using CT tools
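For task 4, the canonical meeting point of CT and statistical ML is the linear-Gaussian state-space model: it is the simplest dynamic Bayesian network, and the Kalman filter computes its exact posterior. A hedged sketch, with illustrative constant-velocity dynamics and noise levels chosen only for demonstration:

```python
import numpy as np

# Linear-Gaussian state-space model (a minimal dynamic Bayesian network):
#   x_{k+1} = A x_k + process noise,  y_k = H x_k + observation noise.
rng = np.random.default_rng(1)
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # position/velocity transition
H = np.array([[1.0, 0.0]])               # only the position is observed
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # observation noise covariance

def kalman_step(x, P, y):
    """One predict/update cycle of the Kalman filter."""
    # Predict: push mean and covariance through the dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct with the new observation y.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Simulate a noisy trajectory and filter it.
x_true = np.array([0.0, 1.0])
x_est, P = np.zeros(2), np.eye(2)        # broad prior on the initial state
for _ in range(50):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x_est, P = kalman_step(x_est, P, y)

# The posterior covariance contracts well below the identity prior.
print(np.trace(P))
```

The same predict/update structure underlies more general Bayesian filtering in dynamic Bayesian networks, which is where the CT analysis tools enter.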

Working group leaders

Leader


Leon Bungert, Prof. Dr.

leon.bungert@uni-wuerzburg.de

Julius-Maximilians-Universität Würzburg, Emil-Fischer-Straße 40, 97074 Würzburg, Germany

Co-Leader


Tatiana Valentine Guy, Prof. Dr.

guy@utia.cas.cz

Institute of Information Theory and Automation, Pod Vodarenskou vezi 4, 182 00 Prague, Czech Republic


Working group members (119)
