An increasing number of emerging applications in data science and engineering
are based on multidimensional and structurally rich data. However, the
irregularities of high-dimensional data often compromise the effectiveness of
standard machine learning algorithms. To address this, we propose the Rank-R Feedforward
Neural Network (FNN), a tensor-based nonlinear learning model that imposes
the Canonical Polyadic (CP) decomposition on its parameters, offering two core
advantages over typical machine learning methods. First, it handles
inputs as multidimensional arrays (tensors), bypassing the need for vectorization, and can
thus fully exploit the structural information along every data dimension.
Second, the number of the model's trainable parameters is substantially
reduced, making it highly efficient in small-sample settings. We
establish the universal approximation and learnability properties of the
Rank-R FNN and validate its performance on real-world hyperspectral datasets.
Experimental evaluations show that the Rank-R FNN is a computationally
inexpensive alternative to ordinary FNNs that achieves state-of-the-art
performance on higher-order tensor data.
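To make the CP-factorized parameterization concrete, below is a minimal sketch (not taken from the paper) of a single rank-R hidden unit. The weight tensor is kept in factorized form, and its inner product with the input tensor is computed one mode at a time, without ever materializing the full weight tensor. All names and sizes here (rank_r_unit, the 5x5x20 input, rank R = 3) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_r_unit(X, factors, bias):
    """One hidden unit with a CP-factorized weight tensor (illustrative sketch).

    factors: list of length R; each entry holds N vectors, one per tensor mode,
             so the implied weight tensor is
             W = sum_r w_r^(1) o w_r^(2) o ... o w_r^(N)  (o = outer product).
    The full W is never formed: <X, W> = sum_r X x_1 w_r^(1) ... x_N w_r^(N).
    """
    score = 0.0
    for mode_vectors in factors:
        contracted = X
        for w in mode_vectors:           # contract one mode at a time
            contracted = np.tensordot(contracted, w, axes=([0], [0]))
        score += contracted              # scalar once every mode is contracted
    return np.tanh(score + bias)         # nonlinear activation

# Hypothetical sizes: a 5x5 spatial patch with 20 spectral bands, rank R = 3.
I1, I2, I3, R = 5, 5, 20, 3
X = rng.standard_normal((I1, I2, I3))
factors = [[rng.standard_normal(I) for I in (I1, I2, I3)] for _ in range(R)]
print(rank_r_unit(X, factors, bias=0.1))

# Parameter count per hidden unit: R*(I1+I2+I3) = 3*(5+5+20) = 90 CP parameters,
# versus I1*I2*I3 = 500 for an unconstrained weight tensor -- the source of the
# parameter reduction claimed above.
```

The same contraction pattern extends to tensors of any order; the CP parameter count grows as the sum of the mode sizes rather than their product, which is what makes the model attractive in small-sample settings.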