WG 2 - ML for CT


Overview

This working group addresses the application of ML tools, such as kernel methods and deep neural networks, to complex models in control theory. These tools will be combined with traditional control methods, either at the algorithmic level, to develop novel and provably efficient techniques, or for analysis purposes, to better understand the opportunities and limitations of ML in control design. The focus is on techniques that can handle high-dimensional problems and mitigate the curse of dimensionality.


Tasks

  1. Addressing the curse of dimensionality with ML tools
  2. Solving parameterised optimal control problems
  3. Construction of control Lyapunov functions using ML methods
  4. Developing ML-based approaches for the life-cycle-optimisation in materials
  5. Exploiting PINNs for solving complex free boundary problems

Open problems (sorted by topic)

Addressing the curse of dimensionality with ML tools

Learn set-valued maps related to control problems with machine learning tools
Required skills: Good command of Python and a basic knowledge of control theory
Contact person: Francisco Periago
Regularity theory for PDEs in high dimensions
Required skills: Good command of functional analysis and PDEs
Contact person: Francisco Periago

Solving parameterised optimal control problems

Nonlinear and transport-dominated problems
Goal: Solve optimal control problems where the governing dynamics are parametric and nonlinear or transport-dominated, for instance $$\frac{\partial y}{\partial t}+\mu\frac{\partial y^2}{\partial x}=u.$$
Required skills: Good command of the numerics of PDEs and of control theory
Details:
  • Nonlinear problems pose difficulties for traditional, linear approximation schemes.
  • This is a general open question in model order reduction.
  • What is possible in the context of (optimal) control of such systems?
  • Where can machine learning help to overcome these issues (nonlinear strategies such as autoencoders, etc.)?
Contact person: Martin Lazar
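A concrete starting point for experiments on such transport-dominated dynamics is a simple full-order solver. The following is a hedged sketch (the scheme, grid, and all parameters are illustrative assumptions, not prescribed by the task): a first-order conservative upwind finite-difference scheme for the model equation $y_t + \mu\,(y^2)_x = u$ with periodic boundary conditions.

```python
# Hedged sketch: explicit upwind scheme for y_t + mu * (y^2)_x = u on a
# periodic 1D grid. Scheme, grid, and parameters are illustrative
# assumptions, not taken from the working group's material.

def step_burgers(y, u, mu, dx, dt):
    """Advance the state y by one explicit time step of size dt.

    Uses the conservative upwind flux f(y) = mu * y**2, assuming y >= 0
    so that the wave speed 2 * mu * y is non-negative and upwinding is
    to the left. Periodic boundary handled via Python's index -1.
    """
    n = len(y)
    y_new = [0.0] * n
    for i in range(n):
        flux_here = mu * y[i] ** 2
        flux_left = mu * y[i - 1] ** 2
        y_new[i] = y[i] - dt / dx * (flux_here - flux_left) + dt * u[i]
    return y_new

def simulate(y0, u, mu, dx, dt, steps):
    """Roll out the scheme and collect snapshots, e.g. as training data."""
    snapshots = [list(y0)]
    y = list(y0)
    for _ in range(steps):
        y = step_burgers(y, u, mu, dx, dt)
        snapshots.append(list(y))
    return snapshots
```

Snapshots of such solutions for varying $\mu$ exhibit the slow Kolmogorov $n$-width decay that limits linear reduced bases, which is exactly where nonlinear strategies such as autoencoders could be tested.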
Control of flow problems such as the Navier-Stokes equations
Goal: Solve optimal control problems for flows governed by parametric Navier-Stokes equations, for instance $$\begin{aligned}\frac{\partial y}{\partial t} - \mu\Delta y + (y\cdot\nabla)y + \nabla p &= u,\\\operatorname{div} y &= 0.\end{aligned}$$
Required skills: Good command of the numerics of PDEs and of control theory
Details:
  • Navier-Stokes equations are an important model for (viscous) flow in real-world applications.
  • Depending on the Reynolds number (roughly inversely proportional to the viscosity parameter $\mu$ in the formulation above), the solution behavior can change completely.
  • Can machine learning help to deal with the turbulent regime?
Contact person: Maria Strazzullo
General convex objective functionals
Goal: Induce sparsity in the control by solving a problem of the form $$u^*_\mu = \arg\min\limits_{u\in G}\ \lVert u\rVert_{L^1([0,T];U)}+\frac{\alpha}{2}\lVert u\rVert_{L^2([0,T];U)}^2 + h(x_\mu).$$
Required skills: Good command of (convex) optimization; Python programming
Details:
  • Which algorithm is suited best to solve this OCP?
  • How to deal with the parameter dependence?
  • Can reduced order modeling be applied here in a suitable manner?
  • If so, how to combine it in a reasonable way with machine learning?
Contact person: Cesare Molinari
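A natural candidate algorithm for this objective is proximal gradient (ISTA-type): the smooth part is handled by a gradient step and the $L^1 + L^2$ (elastic-net) part by its closed-form proximal operator. Below is a hedged finite-dimensional sketch; the matrix $A$, data $b$, and all constants are illustrative assumptions standing in for a discretised smooth term $h$.

```python
# Hedged sketch of proximal gradient (ISTA) for the elastic-net problem
#   min_u  ||u||_1 + (alpha/2) ||u||_2^2 + (1/2) ||A u - b||_2^2,
# a finite-dimensional stand-in for the sparse OCP above. A, b, alpha
# and the step size are illustrative assumptions.

def soft_threshold(v, t):
    """Soft-thresholding: the proximal operator of t * |.|."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def prox_elastic_net(v, t, alpha):
    """Prox of t * (|.| + (alpha/2)(.)^2): shrink, then rescale."""
    return soft_threshold(v, t) / (1.0 + t * alpha)

def ista(A, b, alpha, step, iters):
    """Proximal gradient iteration, componentwise in plain Python."""
    m, n = len(A), len(A[0])
    u = [0.0] * n
    for _ in range(iters):
        # gradient of the smooth part (1/2) ||A u - b||^2
        r = [sum(A[i][j] * u[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        u = [prox_elastic_net(u[j] - step * g[j], step, alpha) for j in range(n)]
    return u
```

For step sizes below $1/\lVert A^\top A\rVert$ the iterates converge to the unique minimiser; the soft-thresholding step is what sets small components of $u$ exactly to zero, i.e. induces sparsity.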
Optimization in the parameter space
Goal: Solve problems of the form $$\mu^* = \arg\min\limits_{\mu\in\mathcal{P}}\ F(\mu;u_\mu)$$ where $u_\mu\in G$ solves an optimal control problem for the parameter $\mu\in\mathcal{P}$.
Required skills: Knowledge in optimization and control theory; Python programming
Details:
  • The optimal control problem for a fixed parameter acts as an "inner" problem.
  • Derivatives with respect to the parameter are typically required.
  • Optimizer usually moves outside of the range of training data points. $\longrightarrow$ How to extrapolate properly?
Contact person: Hendrik Kleikamp
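The inner/outer structure can be made concrete with a deliberately tiny example: a scalar quadratic inner problem solved in closed form, and an outer gradient descent on $\mu$ using central finite differences (a stand-in for the adjoint-based or learned derivatives one would use in practice). All objectives and constants below are assumptions chosen only for illustration.

```python
# Hedged bilevel sketch: "inner" scalar optimal control problem with a
# closed-form solution, "outer" descent over the parameter mu. All
# objectives and constants are illustrative assumptions.

def inner_solution(mu):
    """u_mu = argmin_u (1/2)(u - 1)^2 + (mu/2) u^2, in closed form."""
    return 1.0 / (1.0 + mu)

def outer_objective(mu):
    """F(mu; u_mu): track an assumed desired control value u_d = 0.5."""
    return (inner_solution(mu) - 0.5) ** 2

def optimise_parameter(mu0, step=0.5, h=1e-6, iters=200):
    """Gradient descent on F with a central finite-difference gradient,
    mimicking the case where only evaluations of the inner solver are
    available (no analytic derivative with respect to mu)."""
    mu = mu0
    for _ in range(iters):
        grad = (outer_objective(mu + h) - outer_objective(mu - h)) / (2 * h)
        mu -= step * grad
    return mu
```

In this toy setting $u_\mu = 0.5$ at $\mu = 1$, so the outer iteration should approach $\mu^* = 1$; in the learned-surrogate setting, the same iteration is where the extrapolation issue noted above appears, since the optimizer may leave the training range of $\mu$.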
Small data regime
Goal: Deal with a relatively small amount of available data.
Required skills: Good command of machine learning and control theory
Details:
  • Training data (at least using the FOM) is costly to obtain.
  • Which quantities are easiest to learn when only a small amount of data is available?
    • Optimal control $\longrightarrow$ How to obtain performance guarantees?
    • Reduced quantities $\longrightarrow$ Combining with MOR techniques often makes it possible to collect more training data and to exploit their error estimates.
    • Open loop vs. closed loop systems $\longrightarrow$ Feedback control requires different architectures and learning techniques.
Contact person: Hendrik Kleikamp
Applications in uncertainty quantification
Goal: Make use of the derived surrogates in multilevel Monte Carlo methods: $$\mathbb{E}[M_L] = \mathbb{E}[M_0] + \sum\limits_{\ell=1}^{L} \mathbb{E}[M_\ell-M_{\ell-1}].$$
Required skills: Machine learning and surrogate modeling; a bit of probability theory and statistics; Python programming
Details:
  • Consider different applications in which we want efficient estimates of unknown quantities.
  • Interactions of the different models?
  • Strategies to select the models and the number of evaluations on different levels?
  • Can we derive probabilistic guarantees that this works?
Contact person: Hendrik Kleikamp
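The telescoping estimator above can be sketched in a few lines. The "models" $M_\ell$ here are synthetic stand-ins for surrogates of increasing fidelity, and the sample counts and random inputs are illustrative assumptions:

```python
import random

# Hedged sketch of a multilevel Monte Carlo estimator following the
# telescoping sum E[M_L] = E[M_0] + sum_{l=1}^{L} E[M_l - M_{l-1}].
# The synthetic model M_l(w) = w^2 + 2^(-l) * w approximates the exact
# quantity w^2 with a level-dependent bias; it is an illustrative
# assumption, not a surrogate from the working group.

def model(level, w):
    return w * w + 2.0 ** (-level) * w

def mlmc_estimate(num_levels, samples_per_level, rng):
    """Estimate E[M_L] with independent samples on each level.

    Crucially, M_l and M_{l-1} inside one correction term are evaluated
    on the *same* sample w, which is what keeps the corrections small
    and cheap to estimate.
    """
    # coarsest level: plain Monte Carlo on M_0
    estimate = sum(model(0, rng.random()) for _ in range(samples_per_level[0]))
    estimate /= samples_per_level[0]
    # correction terms E[M_l - M_{l-1}], l = 1, ..., L
    for level in range(1, num_levels + 1):
        n = samples_per_level[level]
        corr = 0.0
        for _ in range(n):
            w = rng.random()
            corr += model(level, w) - model(level - 1, w)
        estimate += corr / n
    return estimate
```

Because the correction variances shrink with the level, most samples can be spent on the cheap coarse model; how to choose the models and the per-level sample counts optimally is precisely the open question listed above.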

Construction of control Lyapunov functions using ML methods

Fast and reliable learning algorithms (in particular for nonsmooth functions)
Contact person: Lars Grüne
Efficient verification of a control Lyapunov function candidate
Contact person: Lars Grüne
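To make the verification task concrete: given a candidate $V(x) = x^\top P x$ for dynamics $\dot{x} = f(x)$, one must check the decrease condition $\nabla V(x) \cdot f(x) < 0$ away from the equilibrium. The sketch below (system, candidate $P$, and sampling box are all illustrative assumptions) only *falsifies* by random sampling; a genuine verification would need, e.g., SMT or interval arithmetic methods.

```python
import random

# Hedged sketch: sampling-based falsification of a Lyapunov candidate
# V(x) = x^T P x for a 2D system dx/dt = f(x). The system and candidate
# are illustrative assumptions; sampling can only refute, not prove.

def f(x):
    # assumed example system: a damped nonlinear oscillator
    return [x[1], -x[0] - x[1] - x[0] ** 3]

def V(x, P):
    return sum(P[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

def V_dot(x, P):
    # d/dt V(x(t)) = 2 x^T P f(x) for symmetric P
    fx = f(x)
    return 2.0 * sum(x[i] * P[i][j] * fx[j] for i in range(2) for j in range(2))

def falsify(P, num_samples, rng, box=1.0):
    """Return a counterexample x with V_dot(x) >= 0, or None if all
    sampled points satisfy the strict decrease condition."""
    for _ in range(num_samples):
        x = [rng.uniform(-box, box), rng.uniform(-box, box)]
        if x[0] == 0.0 and x[1] == 0.0:
            continue  # decrease is only required away from the equilibrium
        if V_dot(x, P) >= 0.0:
            return x
    return None
```

For the assumed system, $P = \begin{pmatrix}1.5 & 0.5\\ 0.5 & 1\end{pmatrix}$ solves the linearised Lyapunov equation with right-hand side $-I$, so no counterexample should be found on the unit box; an indefinite $P$ is refuted almost immediately. The nonsmooth case flagged in the first problem above is harder, since $V$ then lacks a classical gradient.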

Developing ML-based approaches for the life-cycle-optimisation in materials

Approximating solutions to the elasticity system
Goal: Develop the concept of approximating solutions to the elasticity system with damage evolution.
Contact person: Peter Kogut
Existence and uniqueness of weak solutions I
Goal: Establish the existence and uniqueness of weak solutions via approximation for the $L^1$-damage source function $$\phi(\mathbf{e}(\mathbf{u}),\zeta) = -\lambda_D\left(\frac{1-\zeta}{\zeta}\right)-\frac{1}{2}\lambda_u\mathbf{e}(\mathbf{u})\cdot\mathbf{e}(\mathbf{u})+\lambda_w.$$
Contact person: Peter Kogut
Existence and uniqueness of weak solutions II
Goal: Study the existence of weak solutions to the original problem using the following relaxed version $$-\operatorname{div}(\zeta A\mathbf{e}(\mathbf{u}))+\varepsilon\mathbf{u}=\mathbf{f}\qquad\text{in }\Omega_T.$$
Contact person: Peter Kogut
Investigating the strain tensor
Goal: Find out whether the strain tensor $\mathbf{e}(\mathbf{u})=\{\mathbf{e}_{ij}(\mathbf{u})\}$ with $$\mathbf{e}_{ij}(\mathbf{u}) = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i}\right),\qquad\forall i,j=1,\dots,N$$ possesses the higher integrability property, $|\mathbf{e}(\mathbf{u})|\in L^{2(1+\delta)}$ for some $\delta>0$.
Contact person: Peter Kogut
Existence of an optimal control
Goal: Establish the existence of an optimal control provided the damage source function takes the form $$\phi(\mathbf{e}(\mathbf{u}),\zeta) = -\lambda_D\left(\frac{1-\zeta}{\zeta}\right)-\frac{1}{2}\lambda_u\mathbf{e}(\mathbf{u})\cdot\mathbf{e}(\mathbf{u})+\lambda_w.$$
Contact person: Peter Kogut
Controls in the limit of vanishing smoothing
Goal: Find out whether sustainable controls can be attained in the limit as $\varepsilon\to0$ using the following relaxed version of the first equation $$-\operatorname{div}((\zeta)_\varepsilon A\mathbf{e}(\mathbf{u}))=\mathbf{f}\qquad\text{in }\Omega_T,$$ where $(\cdot)_\varepsilon$ stands for the Steklov smoothing operator.
Contact person: Peter Kogut
Existence of a control
Goal: Determine whether there exists a control $\mathbf{f}\in\mathcal{F}_{ad}$ such that the corresponding solutions $(\zeta,\mathbf{u})$ satisfy the equations $$\begin{aligned}-\operatorname{div}(\zeta A\mathbf{e}(\mathbf{u})) &= \mathbf{f},\\\zeta'-\kappa\Delta\zeta &= \phi(\mathbf{e}(\mathbf{u}),\zeta)\end{aligned}$$ in the sense of $L^2(Q_T)$.
Contact person: Peter Kogut

Exploiting PINNs for solving complex free boundary problems

Error estimates for PINNs
Goal: Derive error estimates for PINNs applied to more complex problems, such as elliptic free boundary problems of Bernoulli type, or even evolutionary PDEs.
Contact person: Cristina Trombetti
PINNs containing domain information
Goal: Work out a new type of PINN in which the neural network provides not only the solution to the PDE but also information on the domain.
Contact person: Cristina Trombetti
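For orientation, both problems would build on the standard PINN training objective. In the usual residual formulation (the notation here is generic, not taken from a specific reference of the group), a network $y_\theta$ approximating the solution of $\mathcal{N}[y]=0$ on $\Omega$ with boundary condition $\mathcal{B}[y]=g$ on $\partial\Omega$ is trained by minimising

```latex
\mathcal{L}(\theta)
  = \frac{1}{N_r}\sum_{i=1}^{N_r}\bigl|\mathcal{N}[y_\theta](x_i^r)\bigr|^2
  + \frac{\lambda}{N_b}\sum_{j=1}^{N_b}\bigl|\mathcal{B}[y_\theta](x_j^b) - g(x_j^b)\bigr|^2,
```

where the collocation points $x_i^r\in\Omega$, $x_j^b\in\partial\Omega$ and the weight $\lambda>0$ are design choices. For a free boundary problem, the domain $\Omega$ itself (or a parametrisation of its boundary) becomes an additional unknown coupled to $\theta$, which is what the second problem above asks the network to represent.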

Working group leaders

Leader


Hendrik Kleikamp, Dr.

hendrik.kleikamp@uni-graz.at

University of Graz, Leechgasse 34, 8010 Graz, Austria

Co-Leader


Francisco Periago, Prof. Dr.

f.periago@upct.es

Universidad Politecnica De Cartagena, Plaza Del Cronista Isidoro Valverde Edificio La Milagrosa, 30202 Cartagena, Spain


Working group members (136)

