The goal of spoken language understanding (SLU) systems is to determine the
meaning of the input speech signal, unlike speech recognition, which aims to
produce verbatim transcripts. Advances in end-to-end (E2E) speech modeling have
made it possible to train solely on semantic entities, which are far cheaper to
collect than verbatim transcripts. We focus on this set prediction problem,
where entity order is unspecified. Using two classes of E2E models, RNN
transducers (RNN-T) and attention-based encoder-decoders, we show that these models
work best when the training entity sequence is arranged in spoken order. To
improve E2E SLU models when the spoken order of entities is unknown, we propose
a novel data augmentation technique along with an implicit attention-based
alignment method to infer the spoken order. F1 scores increase significantly,
by more than 11% for RNN-T and about 2% for attention-based encoder-decoder
SLU models, outperforming previously reported results.