This paper describes the status of the MIT ATIS system as of February 1992, focusing especially on the changes made to the SUMMIT recognizer. These include context-dependent phonetic modelling, the use of a bigram language model in conjunction with a probabilistic LR parser, and refinements made to the lexicon. Together with the use of a larger training set, these modifications combined to reduce the speech recognition word and sentence error rates by factors of 2.5 and 1.6, respectively, on the October '91 test set. The weighted error for the entire spoken language system on the same test set is 49.3%. Similar results were also obtained on the February '92 benchmark evaluation.