other,16-3-H01-1042,bq This , the first experiment in a series of experiments , looks at the <term> intelligibility </term> of <term> MT output </term> .
other,11-3-I05-6011,bq This <term> referential information </term> is vital for resolving <term> zero pronouns </term> and improving <term> machine translation outputs </term> .
other,30-2-H05-2007,bq We incorporate this analysis into a <term> diagnostic tool </term> intended for <term> developers </term> of <term> machine translation systems </term> , and demonstrate how our application can be used by <term> developers </term> to explore <term> patterns </term> in <term> machine translation output </term> .
This paper presents a new <term> interactive disambiguation scheme </term> based on the <term> paraphrasing </term> of a <term> parser </term> 's multiple output .
other,16-6-H01-1042,bq We tested this to see if similar criteria could be elicited from duplicating the experiment using <term> machine translation output </term> .
other,11-8-H01-1042,bq Some of the extracts were <term> expert human translations </term> , others were <term> machine translation outputs </term> .
other,38-4-I05-2014,bq The use of <term> BLEU </term> at the <term> character </term> level eliminates the <term> word segmentation problem </term> : it makes it possible to directly compare commercial <term> systems </term> outputting <term> unsegmented texts </term> with , for instance , <term> statistical MT systems </term> which usually segment their <term> outputs </term> .
other,9-4-E06-1035,bq We then explore the impact on <term> performance </term> of using <term> ASR output </term> as opposed to <term> human transcription </term> .
The <term> non-deterministic parsing choices </term> of the <term> main parser </term> for a <term> language L </term> are directed by a <term> guide </term> which uses the <term> shared derivation forest </term> output by a prior <term> RCL parser </term> for a suitable <term> superset of L </term> .
other,15-5-N03-1026,bq Overall <term> summarization </term> quality of the proposed <term> system </term> is state-of-the-art , with guaranteed <term> grammaticality </term> of the <term> system output </term> due to the use of a <term> constraint-based parser/generator </term> .
In this paper we show how two standard outputs from <term> information extraction ( IE ) systems </term> - <term> named entity annotations </term> and <term> scenario templates </term> - can be used to enhance access to <term> text collections </term> via a standard <term> text browser </term> .
other,9-6-E06-1035,bq We also find that the <term> transcription errors </term> inevitable in <term> ASR output </term> have a negative impact on models that combine <term> lexical-cohesion and conversational features </term> , but do not change the general preference of approach for the two tasks .
measure(ment),6-3-H05-1117,bq The lack of automatic <term> methods </term> for <term> scoring system output </term> is an impediment to progress in the field , which we address with this work .
This article considers approaches which rerank the output of an existing <term> probabilistic parser </term> .
other,38-1-N03-1018,bq In this paper , we introduce a <term> generative probabilistic optical character recognition ( OCR ) model </term> that describes an end-to-end process in the <term> noisy channel framework </term> , progressing from generation of <term> true text </term> through its transformation into the <term> noisy output </term> of an <term> OCR system </term> .
other,16-2-N03-1018,bq The <term> model </term> is designed for use in <term> error correction </term> , with a focus on <term> post-processing </term> the <term> output </term> of black-box <term> OCR systems </term> in order to make it more useful for <term> NLP tasks </term> .
other,28-1-H01-1042,bq The purpose of this research is to test the efficacy of applying <term> automated evaluation techniques </term> , originally devised for the <term> evaluation </term> of <term> human language learners </term> , to the <term> output </term> of <term> machine translation ( MT ) systems </term> .
The scheme was implemented by gathering <term> statistics </term> on the output of other <term> linguistic tools </term> .
tech,3-7-P05-1067,bq We evaluate the <term> outputs </term> of our <term> MT system </term> using the <term> NIST and Bleu automatic MT evaluation software </term> .
other,21-3-N03-1001,bq In our method , <term> unsupervised training </term> is first used to train a <term> phone n-gram model </term> for a particular <term> domain </term> ; the <term> output </term> of <term> recognition </term> with this <term> model </term> is then passed to a <term> phone-string classifier </term> .