other,11-3-I05-6011,bq This <term> referential information </term> is vital for resolving <term> zero pronouns </term> and improving <term> machine translation outputs </term> .
other,30-2-H05-2007,bq We incorporate this analysis into a <term> diagnostic tool </term> intended for <term> developers </term> of <term> machine translation systems </term> , and demonstrate how our application can be used by <term> developers </term> to explore <term> patterns </term> in <term> machine translation output </term> .
This paper presents a new <term> interactive disambiguation scheme </term> based on the <term> paraphrasing </term> of a <term> parser </term> 's multiple output .
other,9-4-E06-1035,bq We then explore the impact on <term> performance </term> of using <term> ASR output </term> as opposed to <term> human transcription </term> .
The <term> non-deterministic parsing choices </term> of the <term> main parser </term> for a <term> language L </term> are directed by a <term> guide </term> which uses the <term> shared derivation forest </term> output by a prior <term> RCL parser </term> for a suitable <term> superset of L </term> .
In this paper we show how two standard outputs from <term> information extraction ( IE ) systems </term> - <term> named entity annotations </term> and <term> scenario templates </term> - can be used to enhance access to <term> text collections </term> via a standard <term> text browser </term> .
measure(ment),6-3-H05-1117,bq The lack of automatic <term> methods </term> for <term> scoring system output </term> is an impediment to progress in the field , which we address with this work .
This article considers approaches which rerank the output of an existing <term> probabilistic parser </term> .
other,38-1-N03-1018,bq In this paper , we introduce a <term> generative probabilistic optical character recognition ( OCR ) model </term> that describes an end-to-end process in the <term> noisy channel framework </term> , progressing from generation of <term> true text </term> through its transformation into the <term> noisy output </term> of an <term> OCR system </term> .
other,28-1-H01-1042,bq The purpose of this research is to test the efficacy of applying <term> automated evaluation techniques </term> , originally devised for the <term> evaluation </term> of <term> human language learners </term> , to the <term> output </term> of <term> machine translation ( MT ) systems </term> .
The scheme was implemented by gathering <term> statistics </term> on the output of other <term> linguistic tools </term> .
tech,3-7-P05-1067,bq We evaluate the <term> outputs </term> of our <term> MT system </term> using the <term> NIST and Bleu automatic MT evaluation software </term> .
other,21-3-N03-1001,bq In our method , <term> unsupervised training </term> is first used to train a <term> phone n-gram model </term> for a particular <term> domain </term> ; the <term> output </term> of <term> recognition </term> with this <term> model </term> is then passed to a <term> phone-string classifier </term> .
other,18-1-P06-4014,bq The <term> LOGON MT demonstrator </term> assembles independently valuable <term> general-purpose NLP components </term> into a <term> machine translation pipeline </term> that capitalizes on <term> output quality </term> .
tech,26-2-N03-1026,bq Our <term> system </term> incorporates a <term> linguistic parser/generator </term> for <term> LFG </term> , a <term> transfer component </term> for <term> parse reduction </term> operating on <term> packed parse forests </term> , and a <term> maximum-entropy model </term> for <term> stochastic output selection </term> .
Second , the <term> sentence-plan-ranker ( SPR ) </term> ranks the list of output <term> sentence plans </term> , and then selects the top-ranked <term> plan </term> .
other,18-6-H01-1041,bq Having been trained on <term> Korean newspaper articles </term> on missiles and chemical biological warfare , the <term> system </term> produces the <term> translation output </term> sufficient for content understanding of the <term> original document </term> .
We consider the case of <term> multi-document summarization </term> , where the input <term> documents </term> are in <term> Arabic </term> , and the output <term> summary </term> is in <term> English </term> .
The use of <term> BLEU </term> at the <term> character </term> level eliminates the <term> word segmentation problem </term> : it makes it possible to directly compare commercial <term> systems </term> outputting <term> unsegmented texts </term> with , for instance , <term> statistical MT systems </term> which usually segment their <term> outputs </term> .
other,12-4-P05-2016,bq We also refer to an <term> evaluation method </term> and plan to compare our <term> system 's output </term> with a <term> benchmark system </term> .