Many observations are naturally described by functional data. To use functional data as input to neural networks, summary statistics or transformations into other domains are often used. Recently, [Thi23D] proposed an input layer for neural networks that incorporates the representation of functional data as a basis expansion.
Given a response variable $Y$, functional covariates $x_k(t)$, $k = 1, \dots, K$, and scalar covariates $z_j$, $j = 1, \dots, J$, the authors propose neurons $v_i$ in the first layer that combine the functional linear model with the multivariate linear model.
$$ v_i = g\left( \sum^K_{k=1}\int_{\mathcal{T}} \beta_k(t)\,x_k(t)\,dt + \sum^J_{j=1}\omega_j z_j + b \right) $$
In this formulation, $g(\cdot)$ is the link function known from generalized linear models, the first summand is the functional linear model over the $K$ functional covariates, the second summand is the multivariate linear model over the $J$ scalar covariates, and $b$ denotes the bias.
In neural-network terms, the link function is the neuron's activation function. In contrast to standard formulations, this kind of neuron can incorporate functional data through its first summand and scalar inputs through its second.
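As a concrete illustration (a minimal sketch, not the authors' implementation), the following NumPy function evaluates one such neuron on a uniform time grid, approximating the integral by a Riemann sum; all names and the choice of `tanh` as activation are hypothetical.

```python
import numpy as np

def functional_neuron(x, beta, z, omega, b, t_grid, g=np.tanh):
    """One functional neuron v_i, evaluated on a uniform grid.

    x, beta : (K, T) arrays -- functional covariates x_k and functional
              weights beta_k sampled at the T points of t_grid
    z, omega: (J,) arrays  -- scalar covariates and their weights
    b       : float        -- bias; g is the activation (link) function
    """
    dt = t_grid[1] - t_grid[0]                # uniform grid assumed
    functional_part = (beta * x).sum() * dt   # sum_k int beta_k(t) x_k(t) dt
    return g(functional_part + omega @ z + b)
```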
The functional weights $\beta_k(t)$ are represented as a linear expansion in $M$ basis functions, whose coefficients $c_{km}$ can be learned during training.
$$ \beta_k(t) = \sum^M_{m=1}c_{km}\,\phi_{km}(t) = \mathbf{c}_k^T\boldsymbol{\phi}_k(t) $$
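The following PyTorch sketch shows how such a layer could be trained end to end: the coefficients $c_{km}$ become learnable parameters over a fixed basis. This is an assumption-laden example rather than the paper's code; the sine basis, the initialization, and the class and parameter names are all hypothetical.

```python
import torch
import torch.nn as nn

class FunctionalInputLayer(nn.Module):
    """Hypothetical functional input layer: each neuron's functional
    weights beta_k(t) are parameterized by learnable coefficients over
    a fixed basis, so only the coefficients are trained."""

    def __init__(self, K, J, M, n_neurons, t_grid):
        super().__init__()
        t = torch.as_tensor(t_grid, dtype=torch.float32)
        self.dt = (t[1] - t[0]).item()            # uniform grid assumed
        # fixed basis phi_m(t), here a simple sine basis, shape (M, T)
        m = torch.arange(1, M + 1, dtype=torch.float32).unsqueeze(1)
        self.register_buffer("phi", torch.sin(m * torch.pi * t.unsqueeze(0)))
        # learnable coefficients c_{ikm}: one set per neuron and covariate
        self.c = nn.Parameter(torch.randn(n_neurons, K, M) * 0.1)
        self.scalar = nn.Linear(J, n_neurons)     # omega_j z_j + b

    def forward(self, x, z):
        # x: (batch, K, T) functional covariates on t_grid; z: (batch, J)
        beta = torch.einsum("ikm,mt->ikt", self.c, self.phi)  # beta_{ik}(t)
        # Riemann sum over t approximates the integral; then sum over k
        functional = torch.einsum("bkt,ikt->bi", x, beta) * self.dt
        return torch.tanh(functional + self.scalar(z))
```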
Beyond making functional data available to neural networks, the functional input layer allows a meaningful interpretation of how the functional weights change over the training process.
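For instance, continuing the hypothetical sketch above, a functional weight can be reconstructed from its coefficients at any point during training and inspected over $\mathcal{T}$:

```python
import numpy as np

# Reconstruct beta_{ik}(t) for neuron i=0, covariate k=0 from the learned
# coefficients; repeating this across epochs shows how the weight evolves.
layer = FunctionalInputLayer(K=3, J=2, M=5, n_neurons=4,
                             t_grid=np.linspace(0.0, 1.0, 101))
with torch.no_grad():
    beta_00 = layer.c[0, 0] @ layer.phi       # values of beta_0(t) on t_grid
```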
The paper finally compares functional neural networks (FNNs), which incorporate the functional input layer into a feed-forward network, to functional linear models on real and synthetic data, with promising results.