Projection onto the ℓ1,∞ ball is an efficient way to enforce structured sparsity, but the best known algorithm unfortunately has time complexity O(nm log(nm)) for a matrix in R^{n×m}. In this paper, we propose a new bi-level projection method whose time complexity for the ℓ1,∞ norm is only O(nm) for a matrix in R^{n×m}, and
O(n+m) with full parallel power. We generalize our method to tensors and propose a new multi-level projection, whose induced decomposition yields a parallel speedup of up to an exponential factor, resulting in a time complexity lower-bounded by the sum of the dimensions instead of their product. We provide implementations of our framework for bi-level and tri-level projections (matrices and tensors) for various norms, including parallel implementations. Experiments show that our projection is
2 times faster than the current fastest Euclidean projection algorithms, while providing the same accuracy and better sparsity in neural network applications.
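To fix conventions, a minimal sketch of a two-level ℓ1,∞ projection is given below. It assumes the common convention ℓ1,∞(X) = Σ_i max_j |X_ij| (sum over rows of the row-wise maximum) and is only one plausible reading of the bi-level idea, not the paper's exact algorithm: the row maxima are projected onto the ℓ1 ball (here with a standard sort-based routine), and each row is then clipped at its projected maximum. The function names `l1inf_norm`, `project_l1_ball`, and `bilevel_l1inf_projection` are illustrative, not from the paper.

```python
import numpy as np

def l1inf_norm(X):
    # l1,inf norm under the common convention:
    # sum over rows of the row-wise maximum absolute value.
    return np.abs(X).max(axis=1).sum()

def project_l1_ball(v, radius):
    # Euclidean projection of a nonnegative vector onto the l1 ball
    # of the given radius (standard sort-based routine, O(n log n)).
    if v.sum() <= radius:
        return v.copy()
    u = np.sort(v)[::-1]                  # sort descending
    css = np.cumsum(u)
    # largest index rho with u[rho] * (rho+1) > css[rho] - radius
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)     # soft-threshold at theta

def bilevel_l1inf_projection(X, radius):
    # Illustrative two-level scheme (hypothetical reading of the abstract):
    # 1) project the vector of row maxima onto the l1 ball,
    # 2) clip each row at its projected maximum (row-wise l_inf projection).
    m = np.abs(X).max(axis=1)
    t = project_l1_ball(m, radius)
    return np.sign(X) * np.minimum(np.abs(X), t[:, None])
```

Both steps touch each entry a constant number of times apart from the sort over the n row maxima, and the per-row clipping is embarrassingly parallel, which illustrates why such a decomposition can parallelize well.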