Category | ID | Snippet
other | 25-2-C94-1030 | … Japanese bunsetsu </term> and an <term> English word </term>, and to correct these <term> errors …
tech | 6-1-N03-2017 | … <term> syntax-based constraint </term> for <term> word alignment </term>, known as the <term> cohesion …
model | 14-4-N03-2036 | … projections </term> using an underlying <term> word alignment </term>. We show experimental …
tech | 4-1-H05-1012 | This paper presents a <term> maximum entropy word alignment algorithm </term> for <term> Arabic-English …
tech | 6-2-C04-1192 | … method exploits recent advances in <term> word alignment </term> and <term> word clustering …
tech | 14-2-P05-1034 | … segmentation </term> and an <term> unsupervised word alignment component </term>. We align a <term> …
tech | 4-4-H05-1012 | Significant improvement over traditional <term> word alignment techniques </term> is shown as …
other | 12-5-P05-1074 | … methods </term> using a set of <term> manual word alignments </term>, and contrast the <term> …
measure(ment) | 16-3-H92-1016 | … combined to reduce the <term> speech recognition word and sentence error rates </term> by a factor …
other | 8-10-H01-1042 | Additionally, they were asked to mark the <term> word </term> at which they made this decision …
other | 13-1-E06-1031 | … high <term> costs </term> to movements of <term> word </term> blocks. In many cases though such …
tech | 10-6-C90-3072 | … method has been developed for easy <term> word classification </term>. We describe the …
tech | 9-2-C04-1192 | … advances in <term> word alignment </term> and <term> word clustering </term> based on <term> automatic …
other | 11-1-P03-1051 | … </term> by a <term> model </term> that a <term> word </term> consists of a sequence of <term> morphemes …
tech | 6-3-E06-1018 | … approaches it utilizes <term> clustering of word co-occurrences </term>. This approach differs …
other | 4-8-E06-1031 | … </term>. Results from experiments with <term> word dependent substitution costs </term> will …
other | 31-6-A94-1007 | … analysis cost </term>, the improvement of <term> word disambiguation </term>, the interpretation …
measure(ment) | 20-3-N03-1018 | … significantly reduce <term> character and word error rate </term>, and provide evaluation …
measure(ment) | 14-4-H90-1060 | … recognition </term>, we achieved a 7.5% <term> word error rate </term> on a standard <term> grammar …
other | 11-3-C04-1036 | … empirical quality of <term> distributional word feature vectors </term> and its impact on …