October 30, 2020

Exact Solutions in Log-Concave Maximum Likelihood Estimation
Alexandros Grosdos (TU Munich)

In nonparametric statistics we no longer assume that our data belong to a specific statistical model, but we impose restrictions on the shape of the distribution. Given a sample of points, the best log-concave distribution fitting the data has a probability density function whose logarithm is piecewise-linear. Numerical techniques used so far in statistics often do not allow us to answer mathematical questions, such as determining the number of regions of linearity of such a function. Using this problem as our starting point, we equip ourselves with tools from algebra, combinatorics, analysis, and numerical algebraic geometry on our quest to find and capture exact solutions in log-concave MLE.
This is joint work with Alexander Heaton, Kaie Kubjas, Olga Kuznetsova, Georgy Scholten and Miruna-Stefana Sorea.
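The two defining constraints on the log-concave MLE mentioned above (a concave, piecewise-linear log-density that integrates to 1) can be checked numerically. The following is a minimal sketch with hypothetical knots and values of my own choosing, not the authors' algorithm: concavity amounts to non-increasing slopes across the linear pieces, and a genuine MLE density would have total mass exactly 1 (our unnormalised candidate does not).

```python
import math

knots = [-1.0, 0.0, 2.0]      # hypothetical knot locations (a subset of the sample)
values = [-3.0, -0.5, -4.0]   # hypothetical log-density values at the knots

def log_density(x):
    """Piecewise-linear interpolation of (knots, values); -inf outside the range."""
    for (x0, y0), (x1, y1) in zip(zip(knots, values), zip(knots[1:], values[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return (1 - t) * y0 + t * y1
    return float("-inf")

# Concavity <=> the slopes of the linear pieces are non-increasing.
slopes = [(values[i + 1] - values[i]) / (knots[i + 1] - knots[i])
          for i in range(len(knots) - 1)]
is_concave = all(s0 >= s1 for s0, s1 in zip(slopes, slopes[1:]))

# Total mass of exp(log_density) by the trapezoid rule (1 for a true MLE density).
n = 10000
a, b = knots[0], knots[-1]
h = (b - a) / n
mass = sum(h * 0.5 * (math.exp(log_density(a + i * h)) + math.exp(log_density(a + (i + 1) * h)))
           for i in range(n))
print(is_concave, round(mass, 3))
```

Determining how many linear pieces the true MLE has, and on which knots, is exactly the kind of exact question the talk addresses.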

November 6, 2020

The geometric measure of level sets of random polynomials
Federico Dalmao Artigas (Universidad de la República)

In this talk I will present some tools to deal with the geometric measure of level sets of random functions (random polynomials, say). These tools include Kac-Rice formulas and Hermite expansions. Then I will sketch their use in some particular examples, such as counting the roots of Kostlan polynomials and of Kostlan-Shub-Smale polynomial systems.
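The first of these tools, the Kac-Rice formula, reads in its simplest one-dimensional form as follows: for a sufficiently regular random function $f$ on $[a,b]$ with almost surely no degenerate zeros, the expected number of zeros is

```latex
\mathbb{E}\,\#\{t \in [a,b] : f(t) = 0\}
  = \int_a^b \mathbb{E}\bigl[\,|f'(t)| \;\big|\; f(t) = 0\,\bigr]\, p_{f(t)}(0)\, dt,
```

where $p_{f(t)}$ denotes the density of $f(t)$.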

November 13, 2020

Douglas-Rachford and Sudoku Puzzle: Finite Termination and Local Linear Convergence
Jingwei Liang (University of Cambridge)

In recent years, the Douglas-Rachford splitting method has been shown to be very effective for solving non-convex optimization problems. In this talk, focusing on the popular Sudoku puzzle, I will discuss the local convergence properties of the Douglas-Rachford splitting method when applied to non-convex feasibility problems: both finite termination and local linear convergence will be covered. For a generalization of the Sudoku puzzle, I will show that the local linear convergence rate of Douglas-Rachford is the same across different puzzle sizes, while for the s-queens puzzle finite termination always occurs. This is joint work with Robert Tovey.
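For the feasibility problem "find x in A ∩ B", the Douglas-Rachford iteration with projectors P_A and P_B is y_{k+1} = y_k + P_B(2 P_A(y_k) - y_k) - P_A(y_k), with the "shadow" sequence P_A(y_k) converging to a point of A ∩ B in the convex case. A minimal sketch with a toy example of my own (two lines in the plane, not Sudoku constraints):

```python
def proj_A(x):
    # Projection onto the line x1 = x0.
    m = (x[0] + x[1]) / 2
    return (m, m)

def proj_B(x):
    # Projection onto the line x0 + x1 = 2.
    t = (x[0] + x[1] - 2) / 2
    return (x[0] - t, x[1] - t)

y = (3.0, 0.0)                                # arbitrary starting point
for _ in range(20):
    p = proj_A(y)
    r = (2 * p[0] - y[0], 2 * p[1] - y[1])    # reflection through A
    q = proj_B(r)
    y = (y[0] + q[0] - p[0], y[1] + q[1] - p[1])

shadow = proj_A(y)
print(shadow)   # -> (1.0, 1.0), the intersection point
```

In Sudoku, the sets encode the row, column, block, and clue constraints, and at least one of them is non-convex, which is where the finite-termination and local-rate analysis of the talk comes in.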

November 20, 2020

Reciprocal ML-degree of Brownian Motion Tree Models
Aida Maraj (MPI MiS)

Brownian Motion Tree Models (BMTM) are multivariate Gaussian models that arise in phylogenetics when studying the evolution of species through time. They are realized by rooted directed trees. BMTM are wonderful because the space of their covariance matrices is a linear space of symmetric matrices, and the space of their concentration matrices is a toric variety. In applications, one is interested in computing the point in the model that is most probable for the observed data. The (reciprocal) maximum likelihood degree of the model gives insight into the complexity of this problem. For BMTM, the reciprocal ML-degree can be computed nicely from the structure of the tree; the proof relies on tools from toric geometry. This is based on joint work with T. Boege, J.I. Coons, C. Eur, and F. Röttger.

November 27, 2020

Gradient methods for sparse low-rank matrix recovery
André Uschmajew (MPI MiS)

The problem of recovering a row or column sparse low-rank matrix from linear measurements arises for instance in sparse blind deconvolution. The ideal goal is to ensure recovery using only a minimal number of measurements with respect to the joint low-rank and sparsity constraint. We consider gradient based optimization methods for this task that combine ideas of hard and soft thresholding with Riemannian optimization. This is joint work with Henrik Eisenmann, Felix Krahmer and Max Pfeffer.
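The hard-thresholding ingredient mentioned above can be illustrated in the simpler vector case (the talk concerns the harder jointly sparse and low-rank matrix setting, and combines such steps with Riemannian optimization). This is my own simplified sketch of iterative hard thresholding: a gradient step on ||Ax - b||² followed by keeping only the s largest-magnitude entries.

```python
import random

random.seed(0)
n, m, s = 12, 8, 2
x_true = [0.0] * n
x_true[3], x_true[7] = 1.0, -2.0    # hypothetical 2-sparse signal

# Random Gaussian measurement matrix, columns roughly unit-norm.
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
b = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries, zero out the rest."""
    keep = sorted(range(len(x)), key=lambda j: abs(x[j]), reverse=True)[:s]
    return [x[j] if j in keep else 0.0 for j in range(len(x))]

x = [0.0] * n
step = 0.5
for _ in range(200):
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]   # residual
    g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]          # gradient
    x = hard_threshold([x[j] - step * g[j] for j in range(n)], s)
```

The thresholding step guarantees every iterate is s-sparse; recovery guarantees then hinge on how many measurements m are needed, which is the question of the talk in the joint sparse-plus-low-rank setting.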

December 4, 2020

Nodes on Spectrahedra
Taylor Brysiewicz (MPI MiS)

A spectrahedron in R^3 is the intersection of a 3-dimensional affine linear subspace of d×d real matrices with the cone of positive-semidefinite matrices. Its algebraic boundary is a surface of degree d in C^3 called a symmetroid. Generically, symmetroids have (d^3-d)/6 nodes over C, and the real singularities are partitioned into those which lie on the spectrahedron and those which do not. This data serves as a coarse combinatorial description of the spectrahedron. For d=3 and 4, the possible partitions are known. In this talk, I will explain how we determined which partitions are possible for d=5. In particular, I will explain how numerical algebraic geometry and an enriched hill-climbing algorithm helped us find explicit examples of spectrahedra witnessing each partition.
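A quick sanity check of the generic node count (d^3-d)/6 for the degrees discussed in the talk:

```python
# Generic number of complex nodes of a degree-d symmetroid in C^3.
counts = {d: (d ** 3 - d) // 6 for d in (3, 4, 5)}
print(counts)   # -> {3: 4, 4: 10, 5: 20}
```

So for d=5 there are 20 nodes to distribute between the spectrahedron, its complement in the real symmetroid, and the non-real locus, which is what makes the d=5 classification a substantial combinatorial search.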

December 11, 2020

Analysis of Stochastic Gradient Descent in Continuous Time
Jonas Latz (University of Cambridge)

Stochastic gradient descent is an optimisation method that combines classical gradient descent with random subsampling within the target functional. In this work, we introduce the stochastic gradient process as a continuous-time representation of stochastic gradient descent. The stochastic gradient process is a dynamical system that is coupled with a continuous-time Markov process living on a finite state space. The dynamical system - a gradient flow - represents the gradient descent part, the process on the finite state space represents the random subsampling. Processes of this type are, for instance, used to model clonal populations in fluctuating environments. After introducing it, we study theoretical properties of the stochastic gradient process. We show that it converges weakly to the gradient flow with respect to the full target function, as the learning rate approaches zero. Moreover, we give assumptions under which the stochastic gradient process is exponentially ergodic in the Wasserstein sense. We then additionally assume that the single target functions are strongly convex and the learning rate goes to zero sufficiently slowly. In this case, the process converges with exponential rate to a distribution arbitrarily close to the point mass concentrated in the global minimum of the full target function. We conclude with a discussion of discretisation strategies for the stochastic gradient process and illustrate our concepts in numerical experiments.
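The coupling described above can be simulated directly. The following is a toy illustration of my own, not the paper's setup: the process follows the exact gradient flow of a randomly chosen sub-target f_i and switches i at exponential waiting times whose mean plays the role of the learning rate; the full target is the average of the f_i.

```python
import math
import random

random.seed(1)
# Two hypothetical quadratic sub-targets f_i(x) = (x - c_i)^2 / 2;
# the full target f = (f_1 + f_2)/2 is minimised at the mean of the centres.
centres = [-1.0, 3.0]
rate = 0.01          # learning rate = mean waiting time between switches

x, t, t_end = 0.0, 0.0, 50.0
while t < t_end:
    i = random.randrange(len(centres))                 # pick a sub-target
    dt = min(random.expovariate(1 / rate), t_end - t)  # exponential holding time
    # Exact solution of the gradient flow x' = -(x - c_i) over [0, dt].
    x = centres[i] + (x - centres[i]) * math.exp(-dt)
    t += dt

print(round(x, 2))   # fluctuates near 1.0, the minimiser of the full target
```

As the learning rate shrinks, the switching averages out and the trajectory approaches the gradient flow of the full target, which is the weak-convergence statement of the abstract.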

January 15, 2021

Jordan algebras of symmetric matrices
Arthur Bik (MPI MiS)

Both in optimization (semidefinite programming) and in statistics (covariance/concentration matrices of Gaussian models), one encounters sets of symmetric matrices whose inverses satisfy linear constraints. The goal of this talk is to understand the constraints on inverses that induce the simplest sets: linear spaces. It turns out such constraints always form a Jordan algebra of symmetric matrices. We study their orbits up to congruence and the structure of the algebraic variety they form.
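The defining closure property can be made concrete: a linear space L of symmetric matrices is a Jordan algebra if it is closed under the Jordan product A ∘ B = (AB + BA)/2. A small sketch of my own, checking this for the space of 2×2 diagonal matrices:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def jordan(A, B):
    """Jordan product (AB + BA)/2 of two square matrices."""
    AB, BA = matmul(A, B), matmul(B, A)
    n = len(A)
    return [[(AB[i][j] + BA[i][j]) / 2 for j in range(n)] for i in range(n)]

E11 = [[1.0, 0.0], [0.0, 0.0]]
E22 = [[0.0, 0.0], [0.0, 1.0]]
basis = [E11, E22]

def in_span(M):
    # Membership in the diagonal span: off-diagonal entries must vanish.
    return M[0][1] == 0.0 and M[1][0] == 0.0

closed = all(in_span(jordan(A, B)) for A in basis for B in basis)
print(closed)   # -> True
```

Diagonal matrices are the simplest example; the talk classifies all such spaces up to congruence.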

January 22, 2021

Normaliz – a tool for discrete convex geometry
Winfried Bruns (Universität Osnabrück)

The software Normaliz has been developed in Osnabrück since the mid-1990s. It originated from the need to compute normalizations of affine monoids. Since then it has grown into a versatile tool for discrete convex geometry. Its basic objects are polyhedra defined over the rational numbers or real algebraic number fields. We will give an overview of Normaliz' scope and functionality, demonstrate its use and discuss some interesting applications.
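A flavour of how Normaliz is driven: computations are specified in a plain-text input file. The sketch below (written from memory of the Normaliz 3.x keyword syntax; consult the manual for the exact format) asks for the Hilbert basis of the cone in the plane spanned by (2,1) and (1,3):

```
amb_space 2
cone 2
 2 1
 1 3
HilbertBasis
```

Running `normaliz` on such a file would write the Hilbert basis, among other data, to a corresponding output file.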

February 5, 2021

Finite symplectic automorphism groups of supersingular K3 surfaces
Matthias Schütt (Leibniz Universität Hannover)

Automorphism groups form a classical object of study in algebraic geometry. In recent years, a special focus has been put on automorphisms of K3 surfaces, the most famous example being Mukai’s classification of finite symplectic automorphism groups on complex K3 surfaces. Building on work of Dolgachev-Keum, I will discuss a joint project with Hisanori Ohashi (Tokyo) extending Mukai’s results to fields of positive characteristic. Notably, we will retain the close connection to the Mathieu group M23, while finding much larger groups.

February 12, 2021

Maximum Likelihood Estimation from a Tropical and a Bernstein-Sato Perspective
Anna-Laura Sattelberger (MPI MiS)

In this talk, I present how the Maximum Likelihood Estimation (MLE) problem from statistics can be studied in terms of tropical geometry and Bernstein-Sato ideals from the theory of D-modules. I begin this talk by revisiting the discrete statistical experiment of flipping a biased coin twice. The analysis of this experiment suggests a link between the MLE and the Bernstein-Sato ideal of the parametrization of this statistical model. We will learn how this observation can be explained from an algebro-geometric perspective. This talk is based on joint work with Robin van der Veer (KU Leuven).
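For concreteness, here is the standard binomial version of the coin example (the talk's precise setup may differ): flipping a biased coin twice and recording the number of heads gives the model p(θ) = ((1-θ)², 2θ(1-θ), θ²), and for observed counts (u0, u1, u2) the MLE has the closed form θ̂ = (u1 + 2u2) / (2(u0 + u1 + u2)). A quick check of this formula against a grid search, with counts chosen by me:

```python
import math

u = (2, 5, 3)   # hypothetical observed counts of 0, 1, 2 heads

def log_likelihood(theta):
    p = ((1 - theta) ** 2, 2 * theta * (1 - theta), theta ** 2)
    return sum(ui * math.log(pi) for ui, pi in zip(u, p))

closed_form = (u[1] + 2 * u[2]) / (2 * sum(u))
grid_max = max((i / 10000 for i in range(1, 10000)), key=log_likelihood)
print(closed_form, grid_max)   # -> 0.55 0.55
```

The fact that the MLE is a rational function of the data is the kind of structure that the tropical and Bernstein-Sato perspectives explain geometrically.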