Deep Neural Networks are known to be demanding in terms of computation and
memory. Due to the ever-increasing use of embedded systems and mobile devices
with limited resource budgets, designing low-complexity models without
sacrificing too much of their predictive performance has gained great
importance. In this work, we investigate and compare several well-known methods
to reduce the number of parameters in neural networks. We further put these
into the context of a recent study on the effect of the Receptive Field (RF) on
a model's performance, and empirically show that we can obtain high-performing,
low-complexity models by applying specific restrictions on the RF, in
combination with parameter reduction methods. Additionally, we propose a
filter-damping technique for regularizing the RF of models without altering
their architecture or changing their parameter count. We show that
incorporating this technique improves the performance in various low-complexity
settings such as pruning and decomposed convolutions. Using our proposed filter
damping, we achieved first place in the DCASE-2020 Challenge task of
Low-Complexity Acoustic Scene Classification.
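
To make the idea of filter damping concrete, the following is a minimal PyTorch
sketch of one possible realization: a standard convolution whose kernel is
multiplied element-wise by a fixed mask that decays toward the filter borders,
shrinking the effective RF while leaving the architecture and parameter count
untouched. The class name DampedConv2d, the damp parameter, and the linear
decay profile are illustrative assumptions, not the exact scheme of the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DampedConv2d(nn.Conv2d):
        """Conv2d whose kernel is scaled by a fixed border-damping mask.

        Hypothetical sketch: weights near the kernel border contribute less,
        which reduces the effective receptive field without adding or
        removing any parameters.
        """

        def __init__(self, *args, damp=0.5, **kwargs):
            super().__init__(*args, **kwargs)
            k_h, k_w = self.kernel_size
            # Assumed damping profile: 1.0 at the kernel center, decaying
            # linearly with Chebyshev distance to `damp` at the outer ring.
            ys = torch.arange(k_h).view(-1, 1) - k_h // 2
            xs = torch.arange(k_w).view(1, -1) - k_w // 2
            dist = torch.maximum(ys.abs(), xs.abs()).float()
            max_dist = max(dist.max().item(), 1.0)
            mask = 1.0 - (1.0 - damp) * dist / max_dist
            # Fixed (non-trainable) mask, broadcast over output and input
            # channels of the weight tensor.
            self.register_buffer("damping_mask", mask.view(1, 1, k_h, k_w))

        def forward(self, x):
            # Convolve with the damped kernel; trainable parameters are
            # exactly those of a plain Conv2d.
            return F.conv2d(x, self.weight * self.damping_mask, self.bias,
                            self.stride, self.padding, self.dilation,
                            self.groups)

Such a layer could drop in for any nn.Conv2d, e.g.
DampedConv2d(64, 64, kernel_size=3, padding=1, damp=0.5), so an existing
architecture keeps its structure and parameter count while its RF is
regularized by the damping mask.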