probabilistic context-free grammars </term> working in a ' synchronous ' way . Two <term> hardness
Generation of Referring Expressions </term> : ( a ) <term> numeric-valued attributes </term>
</term> achieved 89.75 % <term> F-measure </term> , a 13 % relative decrease in <term> F-measure
automatically acquiring new <term> stems </term> from a 155 million <term> word </term> <term> unsegmented
features </term> . The best system obtains an 18.6 % improvement over the <term> baseline
questions correctly answered </term> , and a 32.8 % improvement according to the <term>
<term> answer resolution algorithm </term> show a 35.0 % relative improvement over our <term>
accuracy </term> rate from 60 % to 75 % , a 37 % reduction in error . We discuss <term>
<term> error rate </term> dropped to 4.1 % --- a 45 % reduction in <term> error </term> compared
</term> . The models were constructed using a 5K <term> vocabulary </term> and trained using
recognition system </term> , experiments show a 7 % improvement in <term> recognition accuracy
<term> SI recognition </term> , we achieved a 7.5 % <term> word error rate </term> on a standard
<term> vocabulary </term> and trained using a 76 million <term> word </term> <term> Wall Street
, the resulting <term> tagger </term> gives a 97.24 % <term> accuracy </term> on the <term>
</term> . The <term> model </term> is based on a <term> balance matching operation </term> for
combined the <term> log-likelihood </term> under a <term> baseline model </term> ( that of <term>
as either coherent or incoherent ( given a <term> baseline </term> of 54.55 % ) . We propose
the system yields higher performance than a <term> baseline </term> on all three aspects
contains a <term> recognition network </term> , a <term> basic mapping </term> , additional <term>
semantic constraints </term> and thus provide a basis for a useful <term> disambiguation
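Several snippets above report relative error reductions (e.g. accuracy rising from 60 % to 75 % described as a 37 % reduction in error, or an error rate of 4.1 % described as a 45 % reduction). A minimal sketch of that arithmetic, in Python; the function name relative_error_reduction is illustrative and not taken from any of the cited systems:

    def relative_error_reduction(old_error, new_error):
        # Relative reduction = (old - new) / old
        return (old_error - new_error) / old_error

    # Accuracy 60 % -> 75 % means error 40 % -> 25 %:
    print(relative_error_reduction(0.40, 0.25))  # 0.375, i.e. the ~37 % reported above

    # Conversely, an error rate of 4.1 % after a 45 % relative reduction
    # implies an original error rate of about 4.1 / (1 - 0.45) = 7.45 %.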