In recent years, Artificial Intelligence (AI) models have achieved remarkable
success across various domains, yet challenges persist in two critical areas:
ensuring robustness against uncertain inputs and substantially improving model
efficiency during training and inference. Spiking Neural Networks (SNNs),
inspired by biological systems, offer a promising avenue for overcoming these
limitations. By operating in an event-driven manner, SNNs achieve low energy
consumption and can naturally implement biologically inspired mechanisms known for their high
noise tolerance. In this work, we explore the potential of the spiking
Forward-Forward Algorithm (FFA) to address these challenges, leveraging its
representational properties for both Out-of-Distribution (OoD) detection and
interpretability. To achieve this, we exploit the sparse and highly specialized
neural latent space of FF networks to estimate the likelihood of a sample
belonging to the training distribution. Additionally, we propose a novel,
gradient-free attribution method to detect features that drive a sample away
from class distributions, addressing the limitations that the absence of
gradients in spiking models imposes on most visual interpretability methods. We
evaluate our OoD detection algorithm on well-known image datasets (e.g.,
Omniglot, notMNIST, CIFAR-10), outperforming previous methods proposed in the
recent literature for OoD detection in spiking networks. Furthermore, our
attribution method precisely identifies salient OoD features, such as artifacts
or missing regions, thereby providing a visual explanatory interface that helps
the user understand why unknown inputs are flagged as such by the proposed method.
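
To make the latent-space scoring idea concrete, the following is a minimal
illustrative sketch, not the paper's exact procedure: it scores a sample by its
distance to class-wise centroids of the latent activity of a trained FF
network. The helper ff_latent (returning the concatenated layer activities for
an input) and the threshold selection are assumptions introduced only for
illustration.

    import numpy as np

    def fit_centroids(latents, labels):
        # One latent centroid per class, estimated from in-distribution data.
        classes = np.unique(labels)
        return {c: latents[labels == c].mean(axis=0) for c in classes}

    def ood_score(latent, centroids):
        # Distance to the closest class centroid; larger values suggest OoD.
        dists = [np.linalg.norm(latent - mu) for mu in centroids.values()]
        return min(dists)

    # Hypothetical usage, assuming ff_latent(x) exposes the trained spiking
    # FF network's latent activity for input x:
    # centroids = fit_centroids(np.stack([ff_latent(x) for x in train_x]), train_y)
    # score = ood_score(ff_latent(test_x), centroids)
    # is_ood = score > threshold  # threshold chosen on a held-out validation split

Any density estimator over the same latent activity (e.g., a per-class Gaussian
fit) could replace the centroid distance; the sketch only illustrates how the
sparse, specialized FF representation lends itself to such scoring.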