Here is a brief overview of my current research and key publications on each topic. Whilst I do my best to keep this page up to date, the best way to see what I am doing at the moment is by checking my Google Scholar page.

Functional Analysis of Neural Networks
Universal approximation properties of various types of neural networks have been known since the late 1980s. However, it has also been shown that the number of neurons required to reach a given accuracy generally scales exponentially with the dimension of the input space. Certain classes of functions, on the other hand, can be approximated with dimension-independent Monte-Carlo rates. The functional-analytic study of the spaces of such functions has recently become an active area of research.
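
To fix ideas, a typical result of this kind (stated schematically, in my own notation) says that a function $f$ with finite Barron-type norm $\|f\|_{\mathcal B}$ can be approximated by two-layer networks with $n$ neurons at a Monte-Carlo rate,

\begin{equation} \inf_{f_n \in \mathcal F_n} \| f - f_n \|_{L^2(\mu)} \leq \frac{C\,\|f\|_{\mathcal B}}{\sqrt{n}}, \end{equation}

where $\mathcal F_n$ denotes two-layer networks with $n$ neurons, $\mu$ is a probability measure on the input domain and the constant $C$ does not depend on the input dimension.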

When neural networks are used in inherently infinite-dimensional applications such as inverse problems and imaging, they need to be considered as nonlinear operators between infinite-dimensional spaces, rather than as functions between Euclidean spaces (even high-dimensional ones). The generalisation from high but finite dimension to infinite dimension is far from trivial and requires advanced functional-analytic techniques.
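
As a schematic example (my notation, not a definition taken from the paper below), a two-layer network with values in a Banach space $Y$ takes the form

\begin{equation} F(x) = \sum_{k=1}^{n} c_k\, \sigma\big(\langle a_k, x \rangle + b_k\big), \qquad c_k \in Y, \end{equation}

where the inputs $x$ may themselves live in an infinite-dimensional space, the $a_k$ are bounded linear functionals on that space, the $b_k$ are scalar biases and $\sigma$ is an activation function; the coefficients $c_k$ are elements of $Y$ rather than scalars.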

The goal of this project is to advance the understanding of neural networks in the infinite-dimensional setting and to use this understanding to construct more stable and efficient numerical algorithms.

  • Korolev, Y. (2022). Two-layer neural networks with values in a Banach space. SIAM Journal on Mathematical Analysis, 54(6), 6358–6389.

L-infinity variational problems
I am interested in minimisers of Rayleigh quotients involving $L^\infty$-type norms, such as the $W^{1,\infty}$ Sobolev norm:

\begin{equation} \label{eq:quotient} \min_u \frac{\|u\|_{W^{1,\infty}}}{\|u\|_{L^p}}, \end{equation}

where $p \leq \infty$. For $p=\infty$, many global minimisers exist: ground states of the $\infty$-Laplacian $\Delta_\infty$ (solutions of $\min(|\nabla u| - \lambda u, -\Delta_\infty u) = 0$ with an appropriate constant $\lambda$), $\infty$-harmonic functions (solutions of $\Delta_\infty u = 0$) and distance functions are all minimisers of \eqref{eq:quotient}. We show that for any $p < \infty$ the distance function is the only global minimiser and the only positive local minimiser. This yields an efficient numerical algorithm for computing the distance function via a gradient flow, with guaranteed convergence from any positive initialisation.
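
The following toy computation (my own illustration, using the convention $\|u\|_{W^{1,\infty}} = \max\{\|u\|_{L^\infty}, \|u'\|_{L^\infty}\}$ on $[0,1]$) is consistent with this: for $p = 2$, the distance function to the boundary achieves a smaller quotient than a smooth competitor.

```python
import numpy as np

# Toy check of the quotient R_p(u) = ||u||_{W^{1,inf}} / ||u||_{L^p} on [0,1],
# with the convention ||u||_{W^{1,inf}} = max(||u||_inf, ||u'||_inf).
x = np.linspace(0.0, 1.0, 10001)
h = x[1] - x[0]

def quotient(u, p=2.0):
    du = np.gradient(u, h)                         # finite-difference derivative
    w1inf = max(np.abs(u).max(), np.abs(du).max())
    lp = np.trapz(np.abs(u) ** p, x) ** (1.0 / p)
    return w1inf / lp

dist = np.minimum(x, 1.0 - x)   # distance to the boundary {0, 1}
bump = np.sin(np.pi * x)        # a smooth positive competitor

print(quotient(dist))           # approx 3.46 -- the smaller quotient
print(quotient(bump))           # approx 4.44
```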

  • Bungert, L., and Korolev, Y. (2022). Eigenvalue Problems in L^∞: Optimality Conditions, Duality, and Relations with Optimal Transport. Communications of the American Mathematical Society, 2, 345–373.
  • Bungert, L., Korolev, Y., and Burger, M. (2020). Structural analysis of an L-infinity variational problem and relations to distance functions. Pure and Applied Analysis, 2(3), 703–738.

Data driven regularisation theory
Neural networks are becoming increasingly popular tools in imaging but, despite their practical success, their stability is not yet well understood. This is especially apparent in inverse problems such as image reconstruction, which have an inherent instability of their own (such problems are called ill-posed). Inverse problems are usually written as operator equations

\begin{equation}\label{Ax=y} Ax=y, \end{equation}

where $x$ is the quantity of interest (e.g., a brain image), $y$ is the data measured by a scanner (e.g., X-ray data from a CT scanner) and the operator $A$ is a model that describes the physics of the measurement device. In many inverse problems, the inverse of $A$ is unbounded, which causes measurement errors in $y$ to be amplified. Regularisation theory studies such problems in an infinite-dimensional setting and can be used as a tool for developing and analysing algorithms with discretisation-independent stability and convergence guarantees. It relies on our ability to evaluate $A$ and predict the data $y$ for any input $x$. Neural networks, however, often do not have numerical access to the model and rely only on input-output training pairs, which may be supported on a low-dimensional manifold in the ambient space. To analyse the regularisation properties of such algorithms, one needs to extend regularisation theory to the setting in which numerical access to $A$ is lacking and only input-output training pairs are available. In the paper below we show that some classical regularisation methods can indeed be extended to this model-free setting and demonstrate how the similarity of the training pairs to the unknown solution influences convergence.
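
A rough finite-dimensional caricature of this idea (a sketch of my own, not the algorithm from the paper below) is to regularise by projection onto the span of the training data: a new measurement is matched by a least-squares combination of the training measurements, and the same combination of the training solutions is returned as the reconstruction.

```python
import numpy as np

def reconstruct_from_pairs(X_train, Y_train, y, reg=1e-8):
    """Model-free reconstruction sketch using only training pairs (x_i, y_i).

    X_train: (n_x, k) array whose columns are training solutions x_i
    Y_train: (n_y, k) array whose columns are the corresponding data y_i = A x_i
    y:       (n_y,) new measurement; reg: small Tikhonov term for stability
    """
    k = Y_train.shape[1]
    # Coefficients c minimising ||Y_train c - y||^2 + reg * ||c||^2.
    G = Y_train.T @ Y_train + reg * np.eye(k)
    c = np.linalg.solve(G, Y_train.T @ y)
    # The reconstruction applies the same coefficients to the training solutions.
    return X_train @ c
```

In this picture, the reconstruction can only be as good as the span of the training pairs allows, which is one way to see why the similarity of the training pairs to the unknown solution matters.
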
I am grateful to the EPSRC, the Cantab Capital Institute for the Mathematics of Information and the National Physical Laboratory for supporting this research.

  • Aspri, A., Korolev, Y., and Scherzer, O. (2020). Data driven regularisation by projection. Inverse Problems, 36(12), 125009.

Image reconstruction in light-sheet microscopy

We study the problem of deconvolution for light-sheet microscopy, where the data is corrupted by spatially varying blur and a combination of Poisson and Gaussian noise. The spatial variation of the point spread function (PSF) of a light-sheet microscope is determined by the interaction between the excitation sheet and the detection objective PSF. Our work includes forward modelling, modelling of mixed noise, and development and analysis of numerical reconstruction algorithms.
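
A minimal sketch of such a forward model (a generic illustration under my own simplifying assumptions, not the calibrated model from the paper below) approximates the spatially varying blur by interpolating between a few sampled PSFs and then applies Poisson photon noise followed by additive Gaussian read-out noise.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward_model(img, psfs, weights, gauss_sigma=1.0, rng=None):
    """Spatially varying blur as a weighted sum of convolutions, plus mixed noise.

    img:     (H, W) clean image
    psfs:    list of 2D PSFs sampled at a few positions (e.g. depths in the sample)
    weights: list of (H, W) interpolation weights, one per PSF, summing to one
    """
    rng = np.random.default_rng() if rng is None else rng
    blurred = sum(w * fftconvolve(img, k, mode="same") for k, w in zip(psfs, weights))
    blurred = np.clip(blurred, 0.0, None)          # keep intensities nonnegative
    # Poisson photon noise on the blurred intensities plus Gaussian read-out noise.
    return rng.poisson(blurred) + rng.normal(0.0, gauss_sigma, size=img.shape)
```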

  • Toader, B., Boulanger, J., Korolev, Y., Lenz, M. O., Manton, J., Schönlieb, C.-B., and Mureşan, L. (2022). Image Reconstruction in Light-Sheet Microscopy: Spatially Varying Deconvolution and Mixed Noise. Journal of Mathematical Imaging and Vision, 64(9), 968–992. https://doi.org/10.1007/s10851-022-01100-3

Inverse problems with operator errors
It can also happen that the exact operator $A$ is not available, but we have an approximation whose error we can characterise in some way. I am interested in the case when both the space of unknowns $X$ and the space of measurements $Y$ are equipped with a partial order that allows one to compare elements of the space, similarly to how we compare vectors elementwise (for $x, y \in \mathbb R^n$ we say that $x \geq y$ if $x_i \geq y_i$ for all $i = 1, \dots, n$). A partial order can also be introduced in the space of Radon measures $\mathcal M(\Omega)$ ($\mu_1 \geq \mu_2$ if $\mu_1(E) \geq \mu_2(E)$ for all measurable $E \subset \Omega$), in Lebesgue spaces ($f \geq g$ if $f(x) \geq g(x)$ a.e.) and in the space of continuous functions ($f \geq g$ if $f(x) \geq g(x)$ for all $x$). In one dimension, it can be introduced for functions of bounded variation $BV([a,b])$ as follows: $f \geq g$ if $f - g$ is non-decreasing (compare this with the partial order for Radon measures applied to the distributional gradient of a $BV$ function). A partial order for linear operators is induced by the partial orders on the underlying spaces: for $A, B \colon X \to Y$ we say that $A \geq B$ if $Ax \geq_Y Bx$ for any $x \geq_X 0$.
Sometimes it is possible to derive lower and upper bounds in this partial order for the unknown operator $A$ in \eqref{Ax=y}. For example, if $A$ is an integral operator with kernel $K(\cdot,\cdot)$, then pointwise bounds on the kernel induce lower and upper bounds for the integral operator in the above sense. This can be used, for example, in microscopy, where the point-spread function of the microscope (that is, its response to a $\delta$-function) is known only up to a pointwise error. In my previous work I studied regularisation methods for problems with operator errors that can be bounded in the sense of such a partial order.
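
In a discretised setting the mechanism is easy to illustrate (a toy sketch of my own): entrywise bounds on the kernel become entrywise bounds on the system matrix, which in turn bound $Ax$ from below and above for every nonnegative $x$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretised integral operator: A[i, j] ~ K(s_i, t_j) * dt, with the kernel K
# known only up to pointwise bounds, so that A_low <= A <= A_up entrywise.
A_low = rng.uniform(0.5, 1.0, size=(5, 5))
A_up = A_low + rng.uniform(0.0, 0.2, size=(5, 5))
A = 0.5 * (A_low + A_up)            # stand-in for the true, unknown operator

x = rng.uniform(0.0, 1.0, size=5)   # a nonnegative input

# For x >= 0 elementwise, A_low x <= A x <= A_up x in the elementwise order.
assert np.all(A_low @ x <= A @ x) and np.all(A @ x <= A_up @ x)
```
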
I am grateful to the Alexander von Humboldt Foundation and the Royal Society for supporting this research.

  • Bungert, L., Korolev, Y., Burger, M., and Schönlieb, C.-B. (2020). Variational regularisation for inverse problems with imperfect forward operators and general noise models. Inverse Problems, 36(12), 125014.
    DOI: 10.1088/1361-6420/abc531      arXiv: 2005.14131
  • Burger, M., Korolev, Y., and Rasch, J. (2019). Convergence rates and structure of solutions of inverse problems with imperfect forward models. Inverse Problems, 35(2), 024006.
    DOI: 10.1088/1361-6420/aaf6f5      arXiv: 1806.10038
  • Korolev, Y., and Lellmann, J. (2018). Image reconstruction with imperfect forward models and applications in deblurring. SIAM Journal on Imaging Sciences, 11(1), 197–218.
    DOI: 10.1137/17M1141965      arXiv: 1708.01244
  • Gorokh, A., Korolev, Y., and Valkonen, T. (2016). Diffusion tensor imaging with deterministic error bounds. Journal of Mathematical Imaging and Vision, 56, 137–157.
    DOI: 10.1007/s10851-016-0639-7      arXiv: 1509.02223