I submitted a preprint on two-layer neural networks that take values in a Banach space. It extends the results of Bach (2017), E, Ma and Wu (2019), E and Wojtowytsch (2020), and others on approximation rates of infinitely wide scalar-valued two-layer neural networks, and establishes Monte-Carlo rates in Bochner spaces. The most unexpected result (for me) is that, in the vector-valued case, continuity of such neural networks can only be established with respect to the weak$^*$ topology on the target space. This turns out to be a significant restriction for networks with the ReLU activation function.
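To give a rough idea of the setting (the notation here is only schematic and mine, not necessarily the preprint's): the scalar-valued results concern infinitely wide networks of the form
$$
f(x) = \int a\,\sigma(w^\top x + b)\,\mathrm{d}\pi(a,w,b),
\qquad
f_m(x) = \frac{1}{m}\sum_{i=1}^m a_i\,\sigma(w_i^\top x + b_i),
$$
where $f_m$ is a finite network obtained by sampling $(a_i,w_i,b_i)\sim\pi$ i.i.d., and a Monte-Carlo rate is a bound of the type $\|f - f_m\| \lesssim m^{-1/2}$ in a suitable norm. In the vector-valued case the outer weights $a$ take values in a Banach space $X$, and the natural norms for such rates are those of Bochner spaces such as $L^2(\mathbb{P}; X)$.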
The preprint can be found here: arXiv:2105.02095