Solving the \textit{multivariate linear model} (MLM) $Ax=b$ by the
$\ell_1$-norm approximation method, i.e., minimizing
$\left\| Ax-b \right\|_1$, the $\ell_1$-norm of the \textit{residual
error vector} (REV), is a challenging problem. In
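As a minimal illustration of the problem being solved (not one of the paper's own algorithms), the $\ell_1$-norm fit can be obtained through the standard linear-programming reformulation $\min_{x,t} \mathbf{1}^{\top}t$ subject to $-t \le Ax-b \le t$, sketched here in Python using SciPy's general-purpose \texttt{linprog} solver:

```python
# Sketch: l1-norm fitting of Ax ~ b via the standard LP reformulation
#   minimize sum(t)  subject to  -t <= Ax - b <= t,
# solved with scipy.optimize.linprog (HiGHS backend).
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    m, n = A.shape
    # Decision variables: [x (n entries), t (m entries)]; minimize sum of t.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    I = np.eye(m)
    # Ax - t <= b  and  -Ax - t <= -b  encode  |Ax - b| <= t elementwise.
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + m))
    return res.x[:n]

# Toy example: fit a line through points with one gross outlier;
# the l1 criterion is robust and recovers the underlying line.
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0
b[5] += 10.0  # outlier
x = l1_fit(A, b)
```

Because nineteen of the twenty points lie exactly on the line $b = 2t + 1$, the $\ell_1$-optimal solution ignores the single outlier and returns the slope/intercept pair $(2, 1)$, whereas a least-squares fit would be pulled toward the outlier.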
this work, our contributions are twofold: first, an equivalence
theorem for the structure of the $\ell_1$-norm optimal solution to the MLM is
proposed and proved; second, a unified algorithmic framework for solving the
MLM with $\ell_1$-norm optimization is proposed, and six novel algorithms
(L1-GPRS, L1-TNIPM, L1-HP, L1-IST, L1-ADM, L1-POB) are designed. The
algorithms discussed share three significant characteristics: they are
implemented with simple matrix operations and do not depend on specific
optimization solvers; they are described with algorithmic pseudo-code and
implemented in Python and Octave/MATLAB, which makes them easy to use; and
they achieve high accuracy and efficiency in scenarios with different levels
of data redundancy. We hope that the unified theoretical and algorithmic
framework, with source code released on GitHub, will motivate applications of
$\ell_1$-norm optimization for parameter estimation of the MLM arising in
science, technology, engineering, mathematics, economics, and so on.