Recent work has developed optimization procedures to find token sequences,
called adversarial triggers, which can elicit unsafe responses from aligned
language models. These triggers are believed to be highly transferable, i.e., a
trigger optimized on one model can jailbreak other models. In this paper, we
concretely show that such adversarial triggers are not consistently
transferable. We extensively investigate trigger transfer among 13 open
models and observe poor and inconsistent transfer. Our experiments further
reveal a significant difference in robustness to adversarial triggers between
models Aligned by Preference Optimization (APO) and models Aligned by
Fine-Tuning (AFT). We find that APO models are extremely hard to jailbreak even
when triggers are optimized directly on them. In contrast, while AFT models
may appear safe on the surface, refusing a range of unsafe instructions, we
show that they are highly susceptible to adversarial
triggers. Lastly, we observe that most triggers optimized on AFT models also
generalize to new unsafe instructions from five diverse domains, further
emphasizing the vulnerability of AFT models. Overall, our work highlights the need for more
comprehensive safety evaluations for aligned language models.