Sequence-level Knowledge Distillation for Model Compression of Attention-based Sequence-to-sequence Speech Recognition

ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Author(s): Raden Mu'az Mun'im; Nakamasa Inoue; Koichi Shinoda
Publisher: IEEE - Institute of Electrical and Electronics Engineers, Inc.
Publication Date: 1 May 2019
Conference Location: Brighton, United Kingdom
Conference Date: 12 May 2019
Page(s): 6151 - 6155
ISBN (Electronic): 978-1-4799-8131-1
ISSN (Electronic): 2379-190X
DOI: 10.1109/ICASSP.2019.8683171

Abstract: We investigate the feasibility of sequence-level knowledge distillation of Sequence-to-Sequence (Seq2Seq) models for Large Vocabulary Continuous Speech Recognition (LVCSR). We first use a...