other,9-6-E06-1035,bq We also find that the <term> transcription errors </term> inevitable in <term> ASR output </term> have a negative impact on models that combine <term> lexical-cohesion and conversational features </term> , but do not change the general preference of approach for the two tasks .
other,9-4-E06-1035,bq We then explore the impact on <term> performance </term> of using <term> ASR output </term> as opposed to <term> human transcription </term> .
The <term> non-deterministic parsing choices </term> of the <term> main parser </term> for a <term> language L </term> are directed by a <term> guide </term> which uses the <term> shared derivation forest </term> output by a prior <term> RCL parser </term> for a suitable <term> superset of L </term> .
other,16-3-H01-1042,bq This , the first experiment in a series of experiments , looks at the <term> intelligibility </term> of <term> MT output </term> .
This paper presents a new <term> interactive disambiguation scheme </term> based on the <term> paraphrasing </term> of a <term> parser </term> 's multiple output .
other,38-1-N03-1018,bq In this paper , we introduce a <term> generative probabilistic optical character recognition ( OCR ) model </term> that describes an end-to-end process in the <term> noisy channel framework </term> , progressing from generation of <term> true text </term> through its transformation into the <term> noisy output </term> of an <term> OCR system </term> .
Second , the <term> sentence-plan-ranker ( SPR ) </term> ranks the list of output <term> sentence plans </term> , and then selects the top-ranked <term> plan </term> .
other,18-1-P06-4014,bq The <term> LOGON MT demonstrator </term> assembles independently valuable <term> general-purpose NLP components </term> into a <term> machine translation pipeline </term> that capitalizes on <term> output quality </term> .
other,12-4-P05-2016,bq We also refer to an <term> evaluation method </term> and plan to compare our <term> system 's output </term> with a <term> benchmark system </term> .
The subjects were given three minutes per extract to determine whether they believed the sample output to be an <term> expert human translation </term> or a <term> machine translation </term> .
In this paper we show how two standard outputs from <term> information extraction ( IE ) systems </term> - <term> named entity annotations </term> and <term> scenario templates </term> - can be used to enhance access to <term> text collections </term> via a standard <term> text browser </term> .
tech,26-2-N03-1026,bq Our <term> system </term> incorporates a <term> linguistic parser/generator </term> for <term> LFG </term> , a <term> transfer component </term> for <term> parse reduction </term> operating on <term> packed parse forests </term> , and a <term> maximum-entropy model </term> for <term> stochastic output selection </term> .
measure(ment),6-3-H05-1117,bq The lack of automatic <term> methods </term> for <term> scoring system output </term> is an impediment to progress in the field , which we address with this work .
other,15-5-N03-1026,bq Overall <term> summarization </term> quality of the proposed <term> system </term> is state-of-the-art , with guaranteed <term> grammaticality </term> of the <term> system output </term> due to the use of a <term> constraint-based parser/generator </term> .
The use of <term> BLEU </term> at the <term> character </term> level eliminates the <term> word segmentation problem </term> : it makes it possible to directly compare commercial <term> systems </term> outputting <term> unsegmented texts </term> with , for instance , <term> statistical MT systems </term> which usually segment their <term> outputs </term> .
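As an illustration of the point above, here is a minimal sketch of character-level BLEU: stripping spaces and scoring modified character n-gram precisions makes the metric independent of how each system segments its output. The function names and the fixed n-gram order are hypothetical choices for this sketch, not part of the cited work.

```python
from collections import Counter
import math

def char_ngrams(text, n):
    """Count character n-grams after removing spaces, so segmentation is ignored."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def char_bleu(candidate, reference, max_n=4):
    """Geometric mean of modified character n-gram precisions, with a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = char_ngrams(candidate, n)
        ref = char_ngrams(reference, n)
        # clipped (modified) precision: each reference n-gram can be matched at most as
        # many times as it occurs in the reference
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = sum(cand.values())
        precisions.append(overlap / total if total else 0.0)
    if min(precisions) == 0.0:
        return 0.0
    score = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty computed on character counts, not word counts
    c = len(candidate.replace(" ", ""))
    r = len(reference.replace(" ", ""))
    bp = 1.0 if c > r else (math.exp(1 - r / c) if c else 0.0)
    return bp * score
```

Because spaces are discarded before n-gram extraction, a segmented hypothesis and its unsegmented counterpart receive the same score against the same reference, which is exactly why character-level BLEU sidesteps the word segmentation problem.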
other,21-3-N03-1001,bq In our method , <term> unsupervised training </term> is first used to train a <term> phone n-gram model </term> for a particular <term> domain </term> ; the <term> output </term> of <term> recognition </term> with this <term> model </term> is then passed to a <term> phone-string classifier </term> .
We consider the case of <term> multi-document summarization </term> , where the input <term> documents </term> are in <term> Arabic </term> , and the output <term> summary </term> is in <term> English </term> .
tech,3-7-P05-1067,bq We evaluate the <term> outputs </term> of our <term> MT system </term> using the <term> NIST and Bleu automatic MT evaluation software </term> .
The scheme was implemented by gathering <term> statistics </term> on the output of other <term> linguistic tools </term> .
other,16-2-N03-1018,bq The <term> model </term> is designed for use in <term> error correction </term> , with a focus on <term> post-processing </term> the <term> output </term> of black-box <term> OCR systems </term> in order to make it more useful for <term> NLP tasks </term> .