November 21st-25th, 2022
IM PAN, Warsaw, Poland

The workshop is devoted to the applications of tensor decompositions to statistics, optimization and machine learning. The goal is to involve both experts and young researchers, stimulating dialogue and collaborations between people with different backgrounds. We will focus on decompositions over the real and complex numbers.



On Tuesday there will also be two contributed talks by

Venue and accommodation

The workshop takes place at the Institute of Mathematics of the Polish Academy of Sciences (IMPAN). The address is Śniadeckich 8, 00-656 Warsaw, Poland.


Lectures will take place in room number 321, on the third floor. The coffee breaks will be in room 408, on the fourth floor. Lunch is not organized, but our local participants will be happy to give you hints on some tasty options.

              | Monday 21  | Tuesday 22 | Wednesday 23 | Thursday 24 | Friday 25
 9:00 -  9:30 |            |            |              |             |
 9:30 - 10:30 | Portakal   | Améndola   | Tokcan       | Taufer      | Vannieuwenhoven
10:30 - 11:00 | coffee break (all days)
11:00 - 12:00 | Gesmundo   | Khouja     | Tanigawa     | Dewaele     | Pfeffer
12:00 - 14:00 | lunch (on Wednesday: group picture, then lunch)
14:00 - 15:00 | Discussions and working groups (on Tuesday: Grosdos)
15:00 - 15:30 | coffee break (all days)
15:30 - 16:30 | Discussions and working groups (all days)
18:00 - 20:00 | Social dinner (Tuesday)


İrem Portakal - Tensors and equilibria in game theory (slides)
In game theory, n tensors with the same format are used to describe an n-player game. Studying the equilibria of such games is analogous to identifying certain points in the same tensor space. I begin by translating the traditional concepts of Nash and correlated equilibrium into the language of nonlinear algebra. Then I discuss certain situations where those concepts fall short of predicting the most beneficial outcome for all players and how this could be overcome via dependency equilibrium. This talk will feature connections to real algebraic varieties, oriented matroids and polytopes.

Fulvio Gesmundo - Optimization on tensor network varieties (slides)
Tensor network states form a variational ansatz class widely used in the study of quantum many-body systems. Geometrically, these states form an algebraic variety of tensors with rich representation theoretic structure. It is known that tensors on the "boundary" of this variety can provide more efficient representations for states of physical interest, but the pathological geometric properties of the boundary make it difficult to extend the classical optimization methods. In joint work with M. Christandl, D. Stilck-Franca and A. Werner, we introduced a new ansatz class which includes states at the boundary of the tensor network variety. I will present some of the geometric features of this class and explain how it can be used in the variational approach.

Carlos Améndola - Moment and cumulant tensor varieties of Gaussian mixtures (slides)
Gaussian mixture models are ubiquitous in statistics and data analysis, with applications ranging from speech recognition to image segmentation to, recently, modeling the spread of COVID-19. We investigate the problem of recovering the parameters of a Gaussian mixture from its moment or cumulant tensors. It turns out that addressing the feasibility of such an approach can be translated into the study of algebraic varieties formed by the parametrized entries of the Gaussian moment tensors. For instance, identifiability of the Gaussian mixture will depend on whether the secant varieties to the moment and cumulant varieties are defective or not. This exciting connection leads to the search for analogues of the celebrated Alexander-Hirschowitz theorem on secant-defective Veronese varieties.
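The parametrization underlying these moment varieties can be seen already in one dimension, where mixture moments are polynomials in the parameters. A small numeric sketch with made-up weights and parameters (an illustration of the general idea, not of the talk's results):

```python
import numpy as np

# Moments of a univariate Gaussian mixture are polynomial in the parameters;
# moment varieties are cut out by relations among such polynomials.
# Illustrative two-component example with made-up parameters:
w = np.array([0.3, 0.7])      # mixture weights
mu = np.array([-1.0, 2.0])    # component means
sigma = np.array([0.5, 1.0])  # component standard deviations

# closed-form low-order moments of each Gaussian component, mixed by w
m1 = w @ mu                         # E[X]
m2 = w @ (mu**2 + sigma**2)         # E[X^2]
m3 = w @ (mu**3 + 3*mu*sigma**2)    # E[X^3]
```

In higher dimensions the same construction produces the entries of the moment tensors, and recovering (w, mu, sigma) from them is the inverse problem discussed in the talk.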

Rima Khouja - Tensor decomposition for learning Gaussian mixtures from moments (slides)
In data processing and machine learning, an important challenge is to recover and exploit models that accurately represent the data. In this talk, we consider the problem of recovering Gaussian mixture models from datasets. We investigate symmetric tensor decomposition methods for tackling this problem, where the tensor is built from empirical moments of the data distribution. After reviewing Gaussian mixtures and the method of moments, we present an algorithm that implements the method of moments. We propose to use this method to initialize the EM algorithm, and we demonstrate, through examples on synthetic and real datasets, the benefit of this choice in comparison with other state-of-the-art approaches. This is joint work with Bernard Mourrain and Pierre-Alexandre Mattei.
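The symmetric tensors in question are empirical moment tensors of the data. A toy sketch of how such a tensor is assembled from samples (an illustration of the object, not of the speaker's decomposition algorithm):

```python
import numpy as np

def empirical_moment_tensor(X, k=3):
    """Empirical order-k moment tensor (1/n) * sum_i x_i^(tensor power k).

    Toy illustration of the symmetric moment tensors used in the method
    of moments; not the algorithm from the talk.
    """
    n, d = X.shape
    T = np.zeros((d,) * k)
    for x in X:
        term = x
        for _ in range(k - 1):
            term = np.tensordot(term, x, axes=0)  # build x (x) x (x) ... (x) x
        T += term
    return T / n

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))      # placeholder data; in practice one would
M3 = empirical_moment_tensor(X)    # draw from the unknown mixture
```

The resulting tensor is symmetric by construction, which is what makes symmetric tensor decomposition methods applicable.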

Alexandros Grosdos - Beyond covariance matrices: higher-order moment tensors of graphical models (slides)
A directed graph corresponds to a statistical model where the nodes represent random variables and arrows encode relations between them. Covariance matrices of Gaussian linear graphical models have been previously studied in algebraic statistics. Dropping Gaussianity means that the higher-order moment tensors have a meaningful structure. In this talk we explain how to extend the combinatorial description from covariance matrices to higher-order tensors and use computational algebra to obtain a description of the third-order moment variety for trees. We also describe the polytopes coming from the moment parametrisation and explain the situation for graphs with hidden variables.

Muhammad Ardiyansyah - Dimensions of the factor analysis model and its higher order generalizations
The factor analysis model is a statistical model where a certain number of hidden random variables, called factors, linearly affect the behaviour of another set of observed random variables, with additional random noise. The main assumption of the model is that the factors and the noise are Gaussian random variables. In this talk, we do not assume that the factors and the noise are Gaussian, hence the higher order moment and cumulant tensors of the observed variables are generally nonzero. This motivates the generalized notion of the kth-order factor analysis model, that is, the family of all random vectors in a factor analysis model where the factors and the noise have finite and possibly nonzero moment and cumulant tensors up to order k. This subset may be described as the image of a polynomial map onto a Cartesian product of symmetric tensor spaces. We provide its dimension and conditions under which the image has positive codimension. This talk is based on joint work with Luca Sodomaco.
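The classical model can be sketched in a few lines; the dimensions, loading matrix, and noise scale below are made up for illustration:

```python
import numpy as np

# Factor analysis model: observed x = L f + eps, with m hidden factors f.
# The classical version takes f and eps Gaussian; the talk studies what
# happens when they are not. Sampling sketch with made-up dimensions:
rng = np.random.default_rng(2)
d, m, n = 5, 2, 1000
L = rng.normal(size=(d, m))        # factor loading matrix (hypothetical)
F = rng.normal(size=(n, m))        # Gaussian factors (classical assumption)
E = 0.1 * rng.normal(size=(n, d))  # independent noise
X = F @ L.T + E                    # n samples of the observed vector
# In the Gaussian case Cov(x) = L L^T + 0.01 I and all cumulants of order
# three and higher vanish; dropping Gaussianity makes them nonzero.
```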

Neriman Tokcan - Non-negative tensor decompositions and applications in cancer genomics
The tumor microenvironment (TME) is a complex milieu around the tumor, whereby cancer cells interact with stromal, immune, vascular, and extracellular components. The TME is being increasingly recognized as a key determinant of tumor growth, disease progression, and response to therapies. A comprehensive study in this area requires the integration of multi-modal, multi-sample data from different sources. Tensor methods can successfully incorporate data from multiple sources and perform a joint analysis of heterogeneous high-dimensional data sets. We build a generalizable and robust tensor-based framework capable of integrating dissociated single-cell and spatially resolved RNA-sequencing data for a comprehensive analysis of the TME. The methodologies developed as part of this effort will advance our understanding of the TME in multiple directions. These include cellular heterogeneity within the TME, crosstalk between cells, and tumor-intrinsic pathways stimulating tumor growth and immune evasion.

Shin-Ichi Tanigawa - Graph rigidity and identifiability of tensor completions
In this talk I will explain a connection between graph rigidity theory and tensor completions. Graph rigidity theory studies the rigidity or flexibility of bar-joint linkages in Euclidean space. A typical question in this theory is to understand a rigidity property in terms of the underlying graphs or matroids. Replacing the distance measurement with an appropriate polynomial function, one can pose similar questions about the unique identifiability of tensor completions. In this talk I will present various research problems that have emerged through such a formulation.
The talk is based on a joint work with James Cruickshank, Fatemeh Mohammadi, Anthony Nixon, and Kota Nakagawa.

Daniele Taufer - On the regularity of natural apolar schemes (slides)
We are often interested in symbolically computing the "best" algebraic decompositions of a given symmetric tensor. In this talk, we will precisely describe this optimality condition, and we will discuss the difficulty of its computation for generic tensors. Such decompositions canonically define zero-dimensional schemes apolar to the given tensor, which are called natural apolar schemes. We will see how to practically compute such objects, and for particular families we will establish regularity conditions. Understanding their regularity is a huge open research problem, whose practical consequences for tensor decomposition algorithms will be presented in the last part of the talk.

Nick Dewaele - Condition numbers of tensor decompositions (slides)
In this talk, I will give an overview of recent results in the perturbation theory of various tensor decompositions, including the canonical polyadic, block term, and Tucker decomposition. The results can be used to measure the robustness of multilinear models for data analysis. A classic tool from numerical analysis to measure the sensitivity of the output of any computational problem is the condition number. I will start with an exposition of what condition numbers are in general, how they can be applied to tensor decompositions, and how the condition number can be computed numerically. The talk involves joint work with Paul Breiding and Nick Vannieuwenhoven.
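As a generic numerical-analysis warm-up (my own example, not from the talk), the classical condition number of a linear system already shows the idea of quantifying input-output sensitivity:

```python
import numpy as np

# The condition number of a problem bounds, to first order, how much a
# relative perturbation of the input can be amplified in the output.
# Generic linear-system example (not specific to tensor decompositions):
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])  # nearly singular, hence badly conditioned
kappa = np.linalg.cond(A)      # = sigma_max / sigma_min of A
# When solving A x = b, a relative input error eps can cause a relative
# output error of up to roughly kappa * eps.
```

For tensor decompositions the input is the tensor and the output is the set of decomposition parameters, but the same first-order amplification picture applies.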

Nick Vannieuwenhoven - Riemannian optimization for the tensor rank decomposition
The tensor rank decomposition or canonical polyadic decomposition (CPD) is a generalization of a low-rank matrix factorization from matrices to higher-order tensors. In many applications, multi-dimensional data can be meaningfully approximated by a low-rank CPD. In this talk, I will describe a Riemannian optimization method for approximating a tensor by a low-rank CPD. This is a type of optimization method in which the domain is a smooth manifold, i.e. a curved geometric object. The presented method achieved up to two orders of magnitude improvements in execution time for challenging small-scale dense tensors when compared to state-of-the-art nonlinear least squares solvers.
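To fix notation, a rank-R CPD of a third-order tensor can be assembled from three factor matrices. A toy construction with made-up factors (the talk's Riemannian method approximates such a decomposition for a given tensor, which is the much harder inverse direction):

```python
import numpy as np

# A rank-R CPD writes a third-order tensor as a sum of R rank-one terms:
#   T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
rng = np.random.default_rng(1)
n, R = 4, 2
A, B, C = (rng.normal(size=(n, R)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)  # assemble the rank-R tensor
# The mode-1 matricization T.reshape(n, n*n) then has matrix rank at most R,
# which is the sense in which the CPD generalizes low-rank matrix factorization.
```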

Max Pfeffer - Particle number conservation and block-sparse matrix product states
Tensor networks are widely used for computations in quantum chemistry. Constraining the particle number (or other quantum numbers) leads to a block-sparsity pattern in the tensor components. This is exploited in many tensor network codes, in particular in DMRG algorithms. In this talk, we look at such block-sparsity properties from a more general point of view, with potential applications in other contexts. We then consider the interaction of the block structure with matrix product operator representations of Hamiltonians in quantum chemistry. We obtain explicit representations of such Hamiltonians operating directly on the block structures.

This is joint work with Markus Bachmayr and Michael Götte.

Registration and financial support

To attend, please fill in the AGATES registration form, indicating that you plan to attend the workshop in the section "Interest in events". While registering, you can apply for financial support and accommodation. You will also be asked to attach a short research statement.

Institutions involved

The workshop is supported by

The workshop is part of the semester-long program Algebraic Geometry with Applications to TEnsors and Secants (AGATES), which is supported by the Simons Foundation and by the Institute of Mathematics of the Polish Academy of Sciences.