Multi-scale-audio indexing for translingual spoken document retrieval
By: Schone, P.; Meng, H.; Wang, H.-M.; Lo, W.-K.; Chen, B.
2001 / IEEE / 0-7803-7041-4
This item was taken from the IEEE conference paper 'Multi-scale-audio indexing for translingual spoken document retrieval'. MEI (Mandarin-English Information) is an English-Chinese crosslingual spoken document retrieval (CL-SDR) system developed during the Johns Hopkins University Summer Workshop 2000. We integrate speech recognition, machine translation, and information retrieval technologies to perform CL-SDR. MEI advocates a multi-scale paradigm, where both Chinese words and subwords (characters and syllables) are used in retrieval. Subword units complement the word unit in handling the problems of Chinese word tokenization ambiguity, Chinese homophone ambiguity, and out-of-vocabulary words in audio indexing. This paper focuses on multi-scale audio indexing in MEI. Experiments are based on the Topic Detection and Tracking corpora (TDT-2 and TDT-3), where we indexed Voice of America Mandarin news broadcasts by speech recognition at both the word and subword scales. We discuss the development of the MEI syllable recognizer and the representation of spoken documents using overlapping subword n-grams and lattice structures. Results show that augmenting words with subwords is beneficial to CL-SDR performance.
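To illustrate the subword-indexing idea the abstract describes, the sketch below generates overlapping character n-grams from a Mandarin string. This is a minimal toy example, not the MEI implementation: the function name and the bigram choice are assumptions, and the real system also indexes at the word and syllable scales and uses lattice structures.

```python
def char_ngrams(text, n):
    """Return the overlapping character n-grams of a string.

    Illustrative sketch only: MEI indexes Mandarin documents at
    multiple scales (words, characters, syllables); overlapping
    character n-grams sidestep word-tokenization ambiguity because
    they need no word boundaries.
    """
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# Toy example: overlapping character bigrams of a 4-character string.
doc = "北京新闻"
print(char_ngrams(doc, 2))  # ['北京', '京新', '新闻']
```

Because every adjacent character pair becomes an index term, a query term can match a document even when the two sides tokenize the same character sequence into different words.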
Multi-scale Audio Indexing
Mandarin-English Information System
Johns Hopkins University
English-Chinese Crosslingual Spoken Document Retrieval
Chinese Word Tokenization Ambiguity
Chinese Homophone Ambiguity
Topic Detection And Tracking Corpora
Voice Of America
Mandarin News Broadcasts
Spoken Document Representation
Systems Engineering And Theory
Audio Signal Processing