The pruning objective has recently been extended beyond accuracy and sparsity to
robustness in language models. Despite this, existing methods struggle to
maintain robustness against adversarial attacks as model sparsity increases,
and they require a retraining process. As we enter the era of
large language models, these issues become increasingly prominent. This paper
proposes that the robustness of language models is proportional to the extent
of pre-trained knowledge they encompass. Accordingly, we introduce a
post-training pruning strategy designed to faithfully replicate the embedding
space and feature space of dense language models, aiming to preserve more
pre-trained knowledge during the pruning process. In this setup, each layer's
reconstruction error arises not only from the layer itself but also accumulates
the errors of all preceding layers; this accumulated error is then corrected by
an adaptive rectification step.
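As an illustration, the layer-wise reconstruction with error accumulation can be
sketched as follows (a minimal sketch in our own hypothetical notation, not
necessarily the paper's exact formulation):
\[
\min_{\hat{W}_\ell \,:\, \lVert \hat{W}_\ell \rVert_0 \le k}\;
\bigl\lVert \hat{W}_\ell \hat{X}_{\ell-1} - W_\ell X_{\ell-1} \bigr\rVert_F^2 ,
\]
where $W_\ell$ and $X_{\ell-1}$ denote the dense layer's weights and inputs,
$\hat{W}_\ell$ is the pruned weight matrix under a sparsity budget $k$, and
$\hat{X}_{\ell-1}$ is the input produced by the already-pruned preceding layers.
Because $\hat{X}_{\ell-1} \neq X_{\ell-1}$, the error accumulated upstream enters
each layer's objective and can be rectified there rather than compounding
through the network.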
Compared with other state-of-the-art baselines, our approach achieves a superior
balance among accuracy, sparsity, robustness, and pruning cost with BERT on the
SST2, IMDB, and AGNews datasets, marking a significant stride towards robust
pruning in language models.