The performance of text classification methods has improved greatly over the
last decade for text instances of fewer than 512 tokens. This limit has been
adopted by most state-of-the-art transformer models due to the high
computational cost of analyzing longer text instances. To mitigate this problem
and to improve classification for longer texts, researchers have sought to
resolve the underlying causes of this computational cost and have proposed
optimizations for the attention mechanism, the key element of every
transformer model. In our study, we are not pursuing the ultimate goal of long
text classification, i.e., the ability to analyze entire text instances in a
single pass while preserving high performance at a reasonable computational cost.
Instead, we propose a text truncation method called Text Guide, which reduces
the original text to a predefined length limit in a manner that improves
performance over naive and semi-naive truncation approaches while keeping
computational costs low. Text Guide draws on the concept of feature importance,
a notion from the explainable artificial intelligence domain (see the sketch
below). We
demonstrate that Text Guide can be used to improve the performance of recent
language models specifically designed for long text classification, such as
Longformer. Moreover, we discovered that parameter optimization is the key to
Text Guide performance and must be conducted before the method is deployed.
Future experiments may reveal additional benefits provided by this new method.
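
To make the selection idea concrete, the following is a minimal Python sketch
of feature-importance-guided truncation, contrasted with a naive baseline. It
is only a conceptual illustration, not the reference Text Guide implementation:
the function names, the ordered importance list, the fixed word window around
each important feature, and the word-level budget (standing in for the model's
512-token limit) are all simplifying assumptions.

```python
import re

def guided_truncate(text: str, important_features: list[str],
                    budget: int = 510, half_window: int = 10) -> str:
    """Greedily keep word windows around important features, in their
    original order, until the word budget would be exceeded (sketch only)."""
    words = text.split()
    # Crude normalization so that "Refund," matches the feature "refund".
    norm = [re.sub(r"\W+", "", w).lower() for w in words]
    keep: set[int] = set()
    for feat in important_features:  # assumed ordered most -> least important
        for i, token in enumerate(norm):
            if token != feat:
                continue
            window = set(range(max(0, i - half_window),
                               min(len(words), i + half_window + 1)))
            if len(keep | window) > budget:
                # Budget exhausted; emit the selected words in original order.
                return " ".join(words[j] for j in sorted(keep))
            keep |= window
    return " ".join(words[j] for j in sorted(keep))

def naive_truncate(text: str, budget: int = 510) -> str:
    """Naive baseline: keep only the first `budget` words."""
    return " ".join(text.split()[:budget])
```

In a real pipeline, one would count subword tokens with the target model's
tokenizer rather than words, and derive the importance list from an explainable
AI method applied to a trained classifier; the greedy early stop here simply
discards any later, smaller windows once the budget is reached.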