Deep Learning with Functional Inputs

Functional data is common in many applications but has not been integrated naturally into deep-learning methods. This paper presents an input layer that handles functional data natively by integrating it into densely connected feed-forward neural networks.

Many observations are naturally described by functional data. To use functional data as input to neural networks, summary statistics or transformations into other domains are often used. Recently, [Thi23D] proposed an input layer for neural networks that incorporates the representation of functional data as a basis expansion.

Given a response variable $Y$, functional covariates $X(t)$, and non-functional covariates $Z$, the authors propose neurons $v_i$ in the first layer that combine the general functional linear model and the multivariate linear model.

$$ v_i = g\left( \sum^K_{k=1}\int_{\mathcal{T}} \beta_k(t)x_k(t)\, dt + \sum^J_{j=1}\omega_jz_j + b \right) $$

In the above formulation, $g(\cdot)$ is the link function known from generalized linear models; the first summand is the functional linear model with $K$ functional covariates, the second summand is the multivariate linear model, and $b$ is the bias.

In terms of neural networks, the link function is the activation function of the neuron. In contrast to popular formulations, this kind of neuron incorporates functional data through its first summand and scalar inputs through its second.
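The forward pass of such a neuron can be sketched numerically: each integral $\int_{\mathcal{T}} \beta_k(t)x_k(t)\,dt$ is approximated by a quadrature rule over a sampling grid, and the scalar covariates enter through an ordinary linear term. This is a minimal NumPy sketch, not the authors' implementation; the function names, the trapezoidal rule, and the `tanh` activation are illustrative choices.

```python
import numpy as np

def trapezoid(y, t):
    """Trapezoidal approximation of the integral of y sampled on grid t."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def functional_neuron(x_funcs, z, t, beta_funcs, w, b, g=np.tanh):
    """One functional input neuron (illustrative sketch).

    x_funcs:    array (K, T), K functional covariates sampled on grid t
    beta_funcs: array (K, T), functional weights evaluated on the same grid
    z:          array (J,), scalar covariates; w: array (J,), their weights
    b:          scalar bias; g: activation (link) function
    """
    # first summand: sum_k of the integrals ∫ beta_k(t) x_k(t) dt
    functional_term = sum(trapezoid(bk * xk, t) for bk, xk in zip(beta_funcs, x_funcs))
    # second summand: multivariate linear term for the scalar covariates
    scalar_term = float(w @ z)
    return g(functional_term + scalar_term + b)

t = np.linspace(0.0, 1.0, 101)
x = np.stack([np.sin(2 * np.pi * t)])   # one functional covariate
beta = np.stack([np.ones_like(t)])      # constant functional weight
v = functional_neuron(x, np.array([0.5]), t, beta, np.array([2.0]), 0.1)
```

Since the full period of the sine integrates to zero, only the scalar term and bias survive here, so `v` equals `tanh(1.1)`.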

The functional weights $\beta_k(t)$ are represented as a linear expansion of $M$ basis functions, whose coefficients $c_{km}$ can be learned during training.

$$ \beta_k(t) = \sum^M_{m=1}c_{km}\phi_{km}(t) = \mathbf{c}_k^T\boldsymbol{\phi}_k(t) $$
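The expansion above can be made concrete by fixing a basis and treating only the coefficient vector $\mathbf{c}_k$ as a learnable parameter. The sketch below uses a Fourier basis as an example; the paper does not prescribe this particular basis, and the helper names are illustrative.

```python
import numpy as np

def fourier_basis(t, M):
    """Evaluate the first M Fourier basis functions on grid t -> array (M, len(t))."""
    phis = [np.ones_like(t)]  # constant basis function
    m = 1
    while len(phis) < M:
        phis.append(np.sin(2 * np.pi * m * t))
        phis.append(np.cos(2 * np.pi * m * t))
        m += 1
    return np.stack(phis[:M])

def beta(t, c):
    """Functional weight beta(t) = c^T phi(t); c holds the learnable coefficients."""
    return c @ fourier_basis(t, len(c))

t = np.linspace(0.0, 1.0, 101)
b0 = beta(t, np.array([1.0, 0.0, 0.0]))  # constant weight function beta(t) = 1
```

Because the basis functions are fixed, gradients with respect to the weight function reduce to gradients with respect to the finite vector $\mathbf{c}_k$, which is what makes the layer trainable by standard backpropagation.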

The authors of [Thi23D] propose functional input nodes to make functional data usable in neural networks. Each node in the first layer combines a linear basis expansion of the functional observation, a multivariate linear combination of additional scalar covariates, and a bias term.

In addition to making functional data available to neural networks, the functional input layer enables a meaningful interpretation of how the functional weights evolve over the training process.

The paper finally compares functional neural networks (FNNs), which incorporate the functional input layer into a feed-forward network, to functional linear models on real and synthetic data, with promising results.

The feed-forward model with a functional input layer is compared against several popular functional regression models. Each study is repeated 100 times with 300 observations. The reported relative mean prediction error is the obtained error normalized by the error of the best-performing model.
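The normalization step can be stated in a few lines: each model's mean prediction error is divided by the smallest error among all models, so the best model scores 1.0 and the others report how many times larger their error is. The sketch and the example error values below are purely illustrative, not results from the paper.

```python
import numpy as np

def relative_mpe(errors):
    """Normalize each model's mean prediction error by the best model's error.

    errors: dict mapping model name -> mean prediction error over replications.
    The best-performing model gets 1.0; larger values indicate worse models.
    """
    best = min(errors.values())
    return {name: err / best for name, err in errors.items()}

# hypothetical error values for illustration only
rel = relative_mpe({"FNN": 0.8, "FLM": 1.2})
```

With this convention a relative error of, say, 1.5 reads directly as "50% more error than the best model", which makes results comparable across the 100 replications of each study.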