AAA in-person Meeting, July 1, 2022


Learning variational models with unrolling and bilevel optimization
Niklas Breustedt (TU Braunschweig)

In image and signal processing and beyond, quantities of interest are often reconstructed from potentially noisy observations by solving suitably parameterized energy minimization problems. An unknown ground truth is then approximately recovered as the solution of an optimization problem whose energy depends on an observation and on parameters. When choosing an appropriate energy is not obvious or is infeasible, data-driven approaches are commonly used and the parameters, or parts of them, are learned from data. This gives rise to bilevel optimization problems in which the original convex problem (hereafter referred to as the lower-level problem) appears as a constraint. One approach that avoids solving the lower-level problem explicitly is so-called unrolling: the minimizer is replaced by an iterate of an algorithm known to be capable of solving the problem. In this talk we consider unrolling a fixed number of gradient descent iterations on the lower-level problem of a tractable toy model. We compare this approach to solving the lower-level problem explicitly by investigating the expressivity and computing the corresponding risks in a fixed framework.
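As a rough illustration of unrolling (the quadratic toy energy, all names, and all parameter values below are my own choices, not taken from the talk), one can replace the exact lower-level minimizer with a fixed number of gradient descent steps:

```python
import numpy as np

def unrolled_gd(A, y, lam, K=10, step=0.1):
    """Approximate the lower-level minimizer of the toy energy
    E(x) = 0.5*||A x - y||^2 + lam*||x||^2
    by K unrolled gradient descent steps."""
    x = np.zeros(A.shape[1])
    for _ in range(K):
        grad = A.T @ (A @ x - y) + 2 * lam * x  # gradient of E
        x = x - step * grad
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
lam = 0.5

# Explicit minimizer of the same energy: (A^T A + 2*lam*I)^{-1} A^T y
x_exact = np.linalg.solve(A.T @ A + 2 * lam * np.eye(5), A.T @ y)
x_unrolled = unrolled_gd(A, y, lam, K=500, step=0.01)
```

With many iterations the unrolled iterate approaches the explicit minimizer; in learned unrolling, K is kept small and the parameters (here lam) are trained instead.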


A nonlinear randomized Kaczmarz method with Bregman projections
Maximilian Winkler (TU Braunschweig)

We propose a stochastic Newton method for systems of nonlinear equations f_i(x)=0 which can find sparse solutions or solutions under certain simple constraints. The scheme arises by computing Bregman projections onto the solution space of one Newton equation with respect to a fixed convex function whose effective domain lies within the feasible set. Recently, the special case of Euclidean projections has been analyzed in several publications under the name nonlinear Kaczmarz method. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem involving the mirror map needs to be solved in each iteration; this can typically be done with globalized Newton iterations. If the functions f_i are nonnegative, we are in the setting of optimization under the interpolation assumption and we identify our method as stochastic mirror descent with a novel adaptive step size. Here we can compare with a recent work by Loizou et al. (2021), which investigates stochastic mirror descent with a stochastic Polyak step size under interpolation. To the best of our knowledge, our scheme is the first mirror descent method that is invariant under scalar multiplication of the mirror map. We present convergence theorems under two different assumptions: one based on the tangential cone condition, which is frequently imposed on nonlinear inverse problems, and one which fits the setting of convex optimization under the interpolation assumption. We illustrate the numerical effectiveness with examples and compare to methods based on Euclidean projections.
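For orientation, here is a sketch of the Euclidean special case mentioned above, the nonlinear Kaczmarz method (the toy system and all names are my own illustration; the talk's Bregman variant replaces the Euclidean projection below by a Bregman projection):

```python
import numpy as np

def nonlinear_kaczmarz(fs, grads, x0, iters=300, rng=None):
    """Euclidean nonlinear Kaczmarz: pick one equation f_i(x)=0 at random
    and project onto the solution set of its linearization (Newton equation):
        x <- x - f_i(x) / ||grad f_i(x)||^2 * grad f_i(x)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        i = rng.integers(len(fs))
        g = grads[i](x)
        denom = g @ g
        if denom > 0:
            x = x - fs[i](x) / denom * g
    return x

# Toy system: x1^2 + x2^2 - 4 = 0 and x1 - 1 = 0, solution (1, sqrt(3))
fs = [lambda x: x[0]**2 + x[1]**2 - 4, lambda x: x[0] - 1]
grads = [lambda x: np.array([2 * x[0], 2 * x[1]]),
         lambda x: np.array([1.0, 0.0])]
sol = nonlinear_kaczmarz(fs, grads, x0=[2.0, 2.0])
```

Each step is a cheap one-equation Newton projection; the Bregman generalization additionally requires the one-dimensional convex subproblem described in the abstract.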


Graph and distributed extensions of the Douglas-Rachford method
Emanuele Naldi (TU Braunschweig)

We propose several generalizations of the Douglas-Rachford method to solve monotone inclusion problems involving N maximal monotone operators. In particular, our objective is to show that, starting from a directed connected graph with a topological ordering, it is possible to construct a generalization of the Douglas-Rachford algorithm with a communication structure that respects the connections of such a graph. We describe our methods in the (degenerate) Preconditioned Proximal Point framework and, in this way, we are able to prove convergence of every derived algorithm in a unifying way. Specifically, we show that the proposed methods are instances of the proximal point algorithm. We further describe how the graph extension of the DRS method can be leveraged to design new fully distributed protocols. Applications to a congested optimal transport problem and to distributed Support Vector Machines show interesting connections with the underlying graph topology and competitive performance with state-of-the-art distributed optimization approaches.
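For context, the classical two-operator Douglas-Rachford splitting that the talk generalizes can be sketched as follows (the lasso-style toy problem and all names are my own illustration, not from the talk):

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_quad(v, t, b):
    """Proximal map of (t/2)*||. - b||^2."""
    return (v + t * b) / (1 + t)

def douglas_rachford(b, lam=0.5, t=1.0, iters=200):
    """Classical two-operator DRS for min lam*||x||_1 + 0.5*||x - b||^2:
    alternate the two proximal maps and update the driving variable z."""
    z = np.zeros_like(b)
    for _ in range(iters):
        x = prox_l1(z, t * lam)
        y = prox_quad(2 * x - z, t, b)
        z = z + y - x
    return x

b = np.array([2.0, 0.3, -1.5])
x = douglas_rachford(b)
# The minimizer is the soft thresholding of b at lam = 0.5
```

The talk's graph extensions replace this two-operator alternation by N resolvent evaluations whose communication pattern follows the edges of a given directed graph.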


Object detection for nanoscale images
Mandy Stritzke (TU Braunschweig)

Little is known about how molecules behave on different surfaces and at different temperatures. Some of them form periodic and symmetric patterns, which we want to detect and classify using machine learning. In this talk we will provide an overview of state-of-the-art models for object detection and of the special challenges that arise in the context of nanoscale images. Since generating and labeling sufficiently many microscopy images is time-consuming, we train our model on synthetic images and show that the transfer to real images is successful.


Sparse signals on graphs
Tarek Emmrich (Universität Osnabrück)

Signals with a sparse representation in a given basis, as well as Laplacian eigenvectors of graphs, play an important role in signal processing and machine learning. We bring these topics together and consider signals on graphs that have a sparse representation in the basis of eigenvectors of the Laplacian matrix. We give explicit algorithms to recover such signals by sampling them on only a few vertices; in particular, the number of required samples is independent of the total size of the graph and depends only on local properties of the graph.
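A minimal sketch of the setting (my own illustration: a path graph, a 2-sparse signal, and a known support; the talk's algorithms also identify the support, which is the hard part):

```python
import numpy as np

# Laplacian of a path graph on n vertices
n = 50
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Orthonormal eigenbasis of the Laplacian (the "graph Fourier basis")
eigvals, U = np.linalg.eigh(L)

# A signal that is 2-sparse in the eigenbasis
support = [3, 17]
coeffs = np.array([1.0, -2.0])
signal = U[:, support] @ coeffs

# Sample the signal on only 4 of the 50 vertices and recover the
# coefficients by least squares on the sampled rows of the basis
samples = [0, 5, 11, 23]
c_hat, *_ = np.linalg.lstsq(U[np.ix_(samples, support)],
                            signal[samples], rcond=None)
```

The point mirrored here is that the number of samples scales with the sparsity, not with the number of vertices of the graph.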


Gordan's lemma up to symmetry
Dinh Le Van (Universität Osnabrück)

Gordan's lemma is a classical and fundamental result in polyhedral geometry, stating that the lattice points in a rational finitely generated cone form an affine monoid. In this talk, I will present an extension of this result to infinite-dimensional spaces, in which the cones and monoids under consideration are invariant under actions of symmetric groups. This is part of our project that aims at extending classical results in polyhedral geometry to the equivariant setting. The talk is based on joint work with Thomas Kahle and Tim Römer.
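For reference, the classical finite-dimensional statement that the talk extends reads:

```latex
\textbf{Lemma (Gordan).}
Let $C = \operatorname{cone}(v_1,\dots,v_k) \subseteq \mathbb{R}^d$ be a cone
generated by finitely many rational vectors $v_i \in \mathbb{Q}^d$. Then the
monoid $C \cap \mathbb{Z}^d$ is finitely generated, i.e.\ an affine monoid.
```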


On polyhedral invariants of polynomial maps on the plane
Boulos El Hilany (TU Braunschweig)

The bifurcation set of a polynomial map ℂⁿ → ℂⁿ is the smallest set of points in ℂⁿ outside of which the map has a fixed number of points in its preimage. Standard methods for characterizing the bifurcation set rely on elimination techniques that can often be inefficient. I will outline the importance of describing the topology of polynomial maps and the challenges this study carries. I will then present a new, more efficient approach to compute certain discrete invariants of general families of polynomial maps on the plane, and to approximate their bifurcation set.
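A one-variable toy example (my own illustration, not from the talk) makes the definition concrete:

```latex
Let $p : \mathbb{C} \to \mathbb{C}$, $p(x) = x^3 - 3x$. Since $p$ is proper,
its bifurcation set consists of the critical values: $p'(x) = 3x^2 - 3 = 0$
gives $x = \pm 1$ with $p(\pm 1) = \mp 2$. Over any $c \notin \{-2, 2\}$ the
fiber $p^{-1}(c)$ has $3$ points, while over $c = \pm 2$ it has only $2$,
so the bifurcation set is $\{-2, 2\}$.
```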